gem_id (string, 37-41 chars) | paper_id (string, 3-4 chars) | paper_title (string, 19-183 chars) | paper_abstract (string, 168-1.38k chars) | paper_content (sequence) | paper_headers (sequence) | slide_id (string, 37-41 chars) | slide_title (string, 2-85 chars) | slide_content_text (string, 11-2.55k chars) | target (string, 11-2.55k chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|
GEM-SciDuet-train-119#paper-1323#slide-15 | 1323 | Topic Models with Logical Constraints on Words | This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177
],
"paper_content_text": [
"Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.",
"When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.",
"Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.",
"Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .",
"For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.",
"Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.",
"We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.",
"In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.",
"However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?",
"Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.",
"In this paper, we address such logical expressions of links on LDA-DF framework.",
"Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.",
"At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.",
"This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.",
"LDA with Dirichlet Forest Priors We briefly review LDA-DF.",
"Let w := w 1 .",
".",
".",
"w n be a corpus consisting of D documents, where n is the total number of words in the documents.",
"Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.",
"Let T be the number of topics.",
"As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.",
"The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.",
"The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.",
"The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.",
"The trees assigned to topics z are denoted as q.",
"In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.",
"1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.",
"This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).",
"In the case of CLs, we use the following algorithm.",
"For examples, the algorithm creates the two trees in Fig.",
"1 (b) for the constraint CL(A, B) ∧ CL(A, C).",
"The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.",
"Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .",
"I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .",
"C z (s) represents the set of children of node s in tree q z .",
"γ (k) z represents a weight of the edge to node k in tree q z .",
"Additionally, we define ∑ S s := ∑ s∈S .",
"Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z ) , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .",
"|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.",
"After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.",
"θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).",
"We denote it as (∧,∨,¬)-expressions.",
"Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.",
"Interpretation of negations is discussed in Sec.",
"3.4.",
"(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.",
"The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.",
"One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.",
"1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.",
"1(b) .",
"The right tree of Fig.",
"1(b) is created by Np(B) ∧ Np(C).",
"Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.",
"1.",
"Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).",
"2.",
"Calculate the minimum DNF of the primitives.",
"3.",
"Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.",
"Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.",
"1.",
"We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .",
"In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.",
"Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms of an asymptotic behavior of Dirichlet trees.",
"We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.",
"Definition 1 (Asymptotic Topic Family).",
"For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .",
"ATF expresses all combinations of words that can occur in a topic when η is large.",
"In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.",
"As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.",
"Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).",
"If you want to remove {B} and {C}, you can use exclusive disjunctions.",
"For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.",
"The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.",
"Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.",
"imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .",
"Proof.",
"For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.",
"We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .",
"When |ℓ| = 1, f * ℓ = U ℓ is trivial.",
"Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.",
"In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.",
"By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.",
"This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.",
"Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.",
"This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.",
"Definition 3 (Asymptotic Equivalence Relation).",
"Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .",
"We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .",
"The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.",
"Proposition 4.",
"For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.",
"We prove (a) only.",
"We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.",
"In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.",
"The experimental result shown in Tab.",
"1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.",
"Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.",
"Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).",
"In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.",
"4. with respect to W = {A, B} is {∅, {A, B}, {B}}.",
"Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .",
"However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .",
"The asymptotic equivalency can fulfill the anticipation with the next proposition.",
"This simultaneously suggests that our definition is semantically valid.",
"IL(B, A) ≍ ML(A, B) Proof.",
"From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.",
"This informally means ∧ n i=1 X i → Y as an extension of A → B.",
"In this case, we set Proposition 5.",
"For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).",
"When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).",
"This is easier than considering CLs between highfrequency words and unnecessary words as described in ).",
"Negation of Links There are two types of interpretation for negation of links.",
"One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".",
"We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.",
"We consider the strong negation in this study.",
"According to Def.",
"1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .",
"However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.",
"Thus we define it to be fit in strong negation as follows.",
"Definition 6 (ATF of strong negation of links).",
"Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .",
"Note that the definition is used not for primitives but for links.",
"Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.",
"The next proposition gives the way to remove the negation of each link treated in this study.",
"We define no constraint condition as ϵ for the result of ISL.",
"Proposition 7.",
"For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.",
"We prove (a) only.",
"(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .",
"We set topic size as T = 2.",
"The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.",
"We abbreviate the grouping type as AB|AC.",
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.",
"Thus, we naively classify a grouping type of each result into the four types.",
"Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.",
"Fig.",
"2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.",
"The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.",
"Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).",
"The result of (1) is the same result as LDA, because of no constraints.",
"In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.",
"As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.",
"The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.",
"Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.",
"The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).",
"The reason is that (5) allows A to appear with C, while (4) does not.",
"In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.",
"Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.",
"Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .",
"In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.",
"We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.",
"2.",
"To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.",
"After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.",
"Each topic label is determined by looking carefully at highfrequency words in the topic.",
"To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.",
"However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.",
"Note that the topic including 'trek' had other topics such as a topic about comedy film Big Lebowski.",
"We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.",
"After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.",
"Note that our solution is not ad-hoc, and we can easily apply it to similar problems.",
"Conclusions We proposed a simple method to achieve topic models with logical constraints on words.",
"Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.",
"As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.",
"We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.",
"In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.",
"Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.",
"Topic High frequency words in each topic ?",
"have give night film turn performance ?",
"not life have own first only family tell ?",
"movie have n't get good not see ?",
"have black scene tom death die joe ?",
"film have n't not make out well see Isolated have film movie not good make n't ?",
"star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?",
"science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.",
"In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"LDA with Dirichlet Forest Priors",
"Logical Constraints on Words",
"(∧,∨)-expressions of Links",
"Shrinking Dirichlet Forests",
"Customizing New Links",
"Negation of Links",
"Comparison on a Synthetic Corpus",
"Interactive Topic Analysis",
"Conclusions"
]
} | GEM-SciDuet-train-119#paper-1323#slide-15 | Customization of new links | X1, …, Xn do not appear (nearly)
(Remove unnecessary words and stop words)
ISL(X1, …, Xn) = ∧_{i=1..n} ZeroPrim(Xi)
B appears if A appears in a topic (A → B)
(Use when B has multiple meanings)
IL(A, B) = EqualPrim(A, B) ∨ ZeroPrim(A)
Y appears if X1, …, Xn appear in a topic (X1 ∧ … ∧ Xn → Y)
XIL(X1, …, Xn, Y) = ∧_{i=1..n} EqualPrim(Xi, Y) ∨ ∨_{i=1..n} ZeroPrim(Xi) | X1, …, Xn do not appear (nearly)
(Remove unnecessary words and stop words)
ISL(X1, …, Xn) = ∧_{i=1..n} ZeroPrim(Xi)
B appears if A appears in a topic (A → B)
(Use when B has multiple meanings)
IL(A, B) = EqualPrim(A, B) ∨ ZeroPrim(A)
Y appears if X1, …, Xn appear in a topic (X1 ∧ … ∧ Xn → Y)
XIL(X1, …, Xn, Y) = ∧_{i=1..n} EqualPrim(Xi, Y) ∨ ∨_{i=1..n} ZeroPrim(Xi) | []
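The paper_content above (Sec. 3.1) describes compiling (∧, ∨)-expressions of Must-Links and Cannot-Links into Dirichlet trees via the primitives Ep and Np and a minimum DNF. Below is a minimal illustrative sketch of that compilation step; it is not the authors' released code, and the encoding (tuples for primitives, lists of monomials for DNFs) is our own assumption.

```python
from itertools import product

# Primitives: Ep equalizes the occurrence probabilities of two words in a topic,
# Np zeros the occurrence probability of one word.
def Ep(a, b): return ("Ep",) + tuple(sorted((a, b)))
def Np(a):    return ("Np", a)

# Links as DNFs: a DNF is a list of monomials, a monomial is a list of primitives.
def ML(a, b): return [[Ep(a, b)]]             # Must-Link    -> Ep(a, b)
def CL(a, b): return [[Np(a)], [Np(b)]]       # Cannot-Link  -> Np(a) v Np(b)

def OR(*dnfs):
    return [m for d in dnfs for m in d]

def AND(*dnfs):
    # Distribute conjunction over disjunction: one monomial per combination.
    return [sorted(set(p for m in combo for p in m)) for combo in product(*dnfs)]

def minimum_dnf(dnf):
    # Drop duplicates and monomials subsumed by a strictly smaller monomial.
    sets = [frozenset(m) for m in dnf]
    keep = []
    for m in sets:
        if any(o < m for o in sets) or m in keep:
            continue
        keep.append(m)
    return [sorted(m) for m in keep]

# The introduction's example: (ML(A,B) v ML(A,C)) ^ CL(B,C).
A, B, C = "kung-fu", "jackie", "bruce"
for monomial in minimum_dnf(AND(OR(ML(A, B), ML(A, C)), CL(B, C))):
    print(monomial)
```

Running this prints four monomials, matching the four Dirichlet trees derived in the text: Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C).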
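Definition 1 in the paper_content above (the asymptotic topic family, ATF) can be evaluated set-theoretically. The sketch below, again our own illustration rather than code from the paper, reproduces the worked example for (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) with W = {A, B, C}.

```python
from itertools import chain, combinations

def powerset(ws):
    ws = sorted(ws)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))}

def otimes(X, Y):
    # X (x) Y := {x u y | x in X, y in Y}
    return {x | y for x in X for y in Y}

def Ep_star(a, b, W):
    # Rule (iii): Ep*(A, B) = {empty, {A, B}} (x) powerset(W - {A, B})
    return otimes({frozenset(), frozenset({a, b})}, powerset(W - {a, b}))

def Np_star(a, W):
    # Rule (iv): Np*(A) = powerset(W - {A})
    return powerset(W - {a})

W = {"A", "B", "C"}
# (ML(A,B) v ML(A,C)) ^ CL(B,C): union for v, intersection for ^ (rules (i), (ii)).
atf = (Ep_star("A", "B", W) | Ep_star("A", "C", W)) & (Np_star("B", W) | Np_star("C", W))
print(sorted((sorted(s) for s in atf), key=lambda s: (len(s), s)))
# -> [[], ['B'], ['C'], ['A', 'B'], ['A', 'C']], the family derived in the text.
```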
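The customized links of Sec. 3.3 (IL, XIL, ISL; the slide above writes the primitives as EqualPrim and ZeroPrim) and the negation-removal preprocessing of Proposition 7 fit the same DNF-over-primitives encoding. The sketch below is a hypothetical illustration consistent with the definitions in the text; the function names and encoding are our own.

```python
def Ep(a, b): return ("Ep",) + tuple(sorted((a, b)))
def Np(a):    return ("Np", a)

def ML(a, b): return [[Ep(a, b)]]
def CL(a, b): return [[Np(a)], [Np(b)]]

def IL(a, b):
    # Imply-Link (A -> B): Ep(A, B) v Np(A)
    return [[Ep(a, b)], [Np(a)]]

def XIL(xs, y):
    # Extended Imply-Link (X1 ^ ... ^ Xn -> Y): (AND_i Ep(Xi, Y)) v (OR_i Np(Xi))
    return [[Ep(x, y) for x in xs]] + [[Np(x)] for x in xs]

def ISL(xs):
    # Isolate-Link: AND_i Np(Xi), i.e. a single monomial
    return [[Np(x) for x in xs]]

def remove_negation(link, args):
    # Proposition 7: rewrite a (strongly) negated link before compilation.
    if link == "ML":  return CL(*args)                     # (a)
    if link == "CL":  return ML(*args)                     # (b)
    if link == "IL":  return [[Np(args[1])]]               # (c)
    if link == "XIL":                                      # (d)
        xs, y = args[:-1], args[-1]
        return [[Ep(x, xs[-1]) for x in xs[:-1]] + [Np(y)]]
    if link == "ISL": return [[]]                          # (e): no constraint
    raise ValueError(link)

print(ISL(["film", "movie", "have", "not", "n't"]))   # one monomial -> one Dirichlet tree
print(remove_negation("ML", ("jackie", "bruce")))     # behaves like CL('jackie', 'bruce')
```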
GEM-SciDuet-train-119#paper-1323#slide-16 | 1323 | Topic Models with Logical Constraints on Words | [paper_abstract identical to the row above] | [paper_content identical to the row above] | [paper_headers identical to the row above] | GEM-SciDuet-train-119#paper-1323#slide-16 | Interactive topic analysis | Topic High frequency words
have give night film turn performance year mother take out
not life have own first only family tell yet moment even
movie have nt get good not see know just other time make
have black scene tom death die joe ryan man final private
film have nt not make out well see just very watch even
have film original new never more evil nt time power
All topics are unclear
Movie review corpus (1000 reviews)
Isolate-Link(have, film, movie, not, nt)
Remove specified words as well as related unnecessary words
(Isolated) have film movie not good make nt character see more get
star war trek planet effect special lucas jedi science
Comedy comedy funny laugh school hilarious evil power bulworth
Disney disney voice mulan animated song feature tarzan animation
Family life love family mother woman father child relationship
Thriller truman murder killer death thriller carrey final detective
Star Wars and Star Trek are merged, although most topics are clear
Cannot-Link(jedi, trek) Dared to select jedi since
star and war are too common
Star Wars star war lucas effect jedi special matrix menace computer
Comedy funny comedy laugh get hilarious high joke humor bob smith
Disney disney truman voice toy show animation animated tarzan
Family family father mother boy child son parent wife performance
Thriller killer murder case lawyer man david prison performance
Star Trek disappears, altough Star Wars is obtained
Star Wars star war toy jedi menace phantom lucas burton planet
Star Trek alien effect star science special trek action computer
Comedy comedy funny laugh hilarious joke get ben john humor fun
Disney disney voice animated mulan animation family tarzan shrek
Family life love family man story child woman young mother
Thriller scream horror flynt murder killer lawyer death sequel case
We obtained Star Wars and Star Trek appropriately | Topic High frequency words
have give night film turn performance year mother take out
not life have own first only family tell yet moment even
movie have nt get good not see know just other time make
have black scene tom death die joe ryan man final private
film have nt not make out well see just very watch even
have film original new never more evil nt time power
All topics are unclear
Movie review corpus (1000 reviews)
Isolate-Link(have, film, movie, not, nt)
Remove specified words as well as related unnecessary words
(Isolated) have film movie not good make nt character see more get
star war trek planet effect special lucas jedi science
Comedy comedy funny laugh school hilarious evil power bulworth
Disney disney voice mulan animated song feature tarzan animation
Family life love family mother woman father child relationship
Thriller truman murder killer death thriller carrey final detective
Star Wars and Star Trek are merged, although most topics are clear
Cannot-Link(jedi, trek) Dared to select jedi since
star and war are too common
Star Wars star war lucas effect jedi special matrix menace computer
Comedy funny comedy laugh get hilarious high joke humor bob smith
Disney disney truman voice toy show animation animated tarzan
Family family father mother boy child son parent wife performance
Thriller killer murder case lawyer man david prison performance
Star Trek disappears, although Star Wars is obtained
Star Wars star war toy jedi menace phantom lucas burton planet
Star Trek alien effect star science special trek action computer
Comedy comedy funny laugh hilarious joke get ben john humor fun
Disney disney voice animated mulan animation family tarzan shrek
Family life love family man story child woman young mother
Thriller scream horror flynt murder killer lawyer death sequel case
We obtained Star Wars and Star Trek appropriately | [] |
GEM-SciDuet-train-119#paper-1323#slide-17 | 1323 | Topic Models with Logical Constraints on Words | [paper_abstract identical to the first row] | [paper_content identical to the first row]
"The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.",
"The trees assigned to topics z are denoted as q.",
"In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.",
"1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.",
"This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).",
"In the case of CLs, we use the following algorithm.",
"For examples, the algorithm creates the two trees in Fig.",
"1 (b) for the constraint CL(A, B) ∧ CL(A, C).",
"The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.",
"Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .",
"I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .",
"C z (s) represents the set of children of node s in tree q z .",
"γ (k) z represents a weight of the edge to node k in tree q z .",
"Additionally, we define ∑ S s := ∑ s∈S .",
"Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z ) , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .",
"|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.",
"After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.",
"θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).",
"We denote it as (∧,∨,¬)-expressions.",
"Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.",
"Interpretation of negations is discussed in Sec.",
"3.4.",
"(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.",
"The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.",
"One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.",
"1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.",
"1(b) .",
"The right tree of Fig.",
"1(b) is created by Np(B) ∧ Np(C).",
"Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.",
"1.",
"Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).",
"2.",
"Calculate the minimum DNF of the primitives.",
"3.",
"Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.",
"Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.",
"1.",
"We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .",
"In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.",
"Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms of an asymptotic behavior of Dirichlet trees.",
"We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.",
"Definition 1 (Asymptotic Topic Family).",
"For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .",
"ATF expresses all combinations of words that can occur in a topic when η is large.",
"In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.",
"As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.",
"Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).",
"If you want to remove {B} and {C}, you can use exclusive disjunctions.",
"For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.",
"The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.",
"Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.",
"imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .",
"Proof.",
"For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.",
"We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .",
"When |ℓ| = 1, f * ℓ = U ℓ is trivial.",
"Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.",
"In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.",
"By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.",
"This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.",
"Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.",
"This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.",
"Definition 3 (Asymptotic Equivalence Relation).",
"Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .",
"We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .",
"The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.",
"Proposition 4.",
"For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.",
"We prove (a) only.",
"We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.",
"In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.",
"The experimental result shown in Tab.",
"1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.",
"Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.",
"Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).",
"In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.",
"4. with respect to W = {A, B} is {∅, {A, B}, {B}}.",
"Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .",
"However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .",
"The asymptotic equivalency can fulfill the anticipation with the next proposition.",
"This simultaneously suggests that our definition is semantically valid.",
"IL(B, A) ≍ ML(A, B) Proof.",
"From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.",
"This informally means ∧ n i=1 X i → Y as an extension of A → B.",
"In this case, we set Proposition 5.",
"For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).",
"When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).",
"This is easier than considering CLs between highfrequency words and unnecessary words as described in ).",
"Negation of Links There are two types of interpretation for negation of links.",
"One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".",
"We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.",
"We consider the strong negation in this study.",
"According to Def.",
"1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .",
"However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.",
"Thus we define it to be fit in strong negation as follows.",
"Definition 6 (ATF of strong negation of links).",
"Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .",
"Note that the definition is used not for primitives but for links.",
"Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.",
"The next proposition gives the way to remove the negation of each link treated in this study.",
"We define no constraint condition as ϵ for the result of ISL.",
"Proposition 7.",
"For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.",
"We prove (a) only.",
"(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .",
"We set topic size as T = 2.",
"The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.",
"We abbreviate the grouping type as AB|AC.",
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.",
"Thus, we naively classify a grouping type of each result into the four types.",
"Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.",
"Fig.",
"2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.",
"The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.",
"Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).",
"The result of (1) is the same result as LDA, because of no constraints.",
"In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.",
"As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.",
"The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.",
"Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.",
"The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).",
"The reason is that (5) allows A to appear with C, while (4) does not.",
"In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.",
"Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.",
"Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .",
"In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.",
"We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.",
"2.",
"To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.",
"After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.",
"Each topic label is determined by looking carefully at highfrequency words in the topic.",
"To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.",
"However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.",
"Note that the topic including 'trek' had other topics such as a topic about comedy film Big Lebowski.",
"We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.",
"After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.",
"Note that our solution is not ad-hoc, and we can easily apply it to similar problems.",
"Conclusions We proposed a simple method to achieve topic models with logical constraints on words.",
"Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.",
"As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.",
"We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.",
"In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.",
"Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.",
"Topic High frequency words in each topic ?",
"have give night film turn performance ?",
"not life have own first only family tell ?",
"movie have n't get good not see ?",
"have black scene tom death die joe ?",
"film have n't not make out well see Isolated have film movie not good make n't ?",
"star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?",
"science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.",
"In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"LDA with Dirichlet Forest Priors",
"Logical Constraints on Words",
"(∧,∨)-expressions of Links",
"Shrinking Dirichlet Forests",
"Customizing New Links",
"Negation of Links",
"Comparison on a Synthetic Corpus",
"Interactive Topic Analysis",
"Conclusions"
]
} | GEM-SciDuet-train-119#paper-1323#slide-17 | Conclusion | Simple algorithm for logical constraints on words for topic modeling
Must-Link(A,B) A and B appear in the same topic
Cannot-Link(A,B) A and B do not appear in the same topic
Theorem for the correctness of the algorithm
Customization of new links
Imply-Link(A, B): B appears if A appears in a topic
Comparative experiments on real corpora | Simple algorithm for logical constraints on words for topic modeling
Must-Link(A,B) A and B appear in the same topic
Cannot-Link(A,B) A and B do not appear in the same topic
Theorem for the correctness of the algorithm
Customization of new links
Imply-Link(A, B): B appears if A appears in a topic
Comparative experiments on real corpora | [] |
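The record above describes compiling Must-Link/Cannot-Link constraints into Dirichlet trees by rewriting them into the primitives Ep (equalize probabilities) and Np (zero out a probability) and expanding to DNF, one tree per monomial. Here is a minimal illustrative sketch of that expansion, assuming a small frozenset/tuple encoding; it is not the authors' implementation, and DNF minimization and the asymptotic shrinking step are omitted.

```python
# Expressions are either a frozenset of primitive strings (a monomial) or
# ("and"/"or", left, right).  ML/CL are rewritten into Ep/Np as in the paper.
def ML(a, b):                       # Must-Link(a, b) -> Ep(a, b)
    return frozenset({f"Ep({a},{b})"})

def CL(a, b):                       # Cannot-Link(a, b) -> Np(a) v Np(b)
    return ("or", frozenset({f"Np({a})"}), frozenset({f"Np({b})"}))

def AND(e1, e2):
    return ("and", e1, e2)

def OR(e1, e2):
    return ("or", e1, e2)

def dnf(expr):
    """Expand an expression into a set of monomials (frozensets of primitives)."""
    if isinstance(expr, frozenset):
        return {expr}
    op, e1, e2 = expr
    d1, d2 = dnf(e1), dnf(e2)
    if op == "or":
        return d1 | d2
    return {m1 | m2 for m1 in d1 for m2 in d2}   # distribute AND over OR

# (ML(A,B) v ML(A,C)) ^ CL(B,C): four monomials -> four Dirichlet trees
expr = AND(OR(ML("A", "B"), ML("A", "C")), CL("B", "C"))
for monomial in sorted(dnf(expr), key=sorted):
    print(sorted(monomial))
```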
GEM-SciDuet-train-119#paper-1323#slide-18 | 1323 | Topic Models with Logical Constraints on Words | This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF). Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177
],
"paper_content_text": [
"Introduction Topic models such as Latent Dirichlet Allocation or LDA (Blei et al., 2003) are widely used to capture hidden topics in a corpus.",
"When we have domain knowledge of a target corpus, incorporating the knowledge into topic models would be useful in a practical sense.",
"Thus there have been many studies of semi-supervised extensions of topic models (Andrzejewski et al., 2007; Toutanova and Johnson, 2008; ), although topic models are often regarded as unsupervised learning.",
"Recently, ) developed a novel topic modeling framework, LDA with Dirichlet Forest priors (LDA-DF), which achieves two links Must-Link (ML) and Cannot-Link (CL) in the constrained clustering literature (Basu et al., 2008) .",
"For given words A and B, ML(A, B) and CL (A, B) are soft constraints that A and B must appear in the same topic, and that A and B cannot appear in the same topic, respectively.",
"Let us consider topic analysis of a corpus with movie reviews for illustrative purposes.",
"We know that two words 'jackie' (means Jackie Chan) and 'kung-fu' should appear in the same topic, while 'dicaprio' (means Leonardo DiCaprio) and 'kung-fu' should not appear in the same topic.",
"In this case, we can add constraints ML('jackie', 'kung-fu') and CL ('dicaprio', 'kung-fu') to smoothly conduct analysis.",
"However, what if there is a word 'bruce' (means Bruce Lee) in the corpus, and we want to distinguish between 'jackie' and 'bruce'?",
"Our full knowledge among 'kung-fu', 'jackie', and 'bruce' should be (ML('kung-fu', 'jackie') ∨ ML('kung-fu', 'bruce')) ∧ CL('bruce', 'jackie'), although the original framework does not allow a disjunction (∨) of links.",
"In this paper, we address such logical expressions of links on LDA-DF framework.",
"Combination between a probabilistic model and logical knowledge expressions such as Markov Logic Network (MLN) is recently getting a lot of attention (Riedel and Meza-Ruiz, 2008; Yu et al., 2008; Meza-Ruiz and Riedel, 2009; Yoshikawa et al., 2009; Poon and Domingos, 2009) , and our work can be regarded as on this research line.",
"At least, to our knowledge, our method is the first one that can directly incorporate logical knowledge into a prior for topic models without MLN.",
"This means the complexity of the inference in our method is essentially the same as in the original LDA-DF, despite that our method can broaden knowledge expressions.",
"LDA with Dirichlet Forest Priors We briefly review LDA-DF.",
"Let w := w 1 .",
".",
".",
"w n be a corpus consisting of D documents, where n is the total number of words in the documents.",
"Let d i and z i be the document that includes the i-th word w i and the hidden topic that is assigned to w i , respectively.",
"Let T be the number of topics.",
"As in LDA, we assume a probabilistic language model that generates a corpus as a mixture of hidden topics and infer two parameters: a documenttopic probability θ that represents a mixture rate of topics in each document, and a topic-word probability ϕ that represents an occurrence rate of words in each topic.",
"The model is defined as θ d i ∼ Dirichlet(α), z i |θ d i ∼ Multinomial(θ d i ), q ∼ DirichletForest(β, η), ϕ z i ∼ DirichletTree(q), w i |z i , ϕ z i ∼ Multinomial(ϕ z i ), where α and (β, η) are hyper parameters for θ and ϕ, respectively.",
"The only difference between LDA and LDA-DF is that ϕ is chosen not from the Dirichlet distribution, but from the Dirichlet tree distribution (Dennis III, 1991) , which is a generalization of the Dirichlet distribution.",
"The Dirichlet forest distribution assigns one tree to each topic from a set of Dirichlet trees, into which we encode domain knowledge.",
"The trees assigned to topics z are denoted as q.",
"In the framework, ML (A, B) is achieved by the Dirichlet tree in Fig.",
"1(a) , which equalizes the occurrence probabilities of A and B in a topic when η is large.",
"This tree generates probabilities with Dirichlet(2β, β) and redistributes the probability for \"2β\" with Dirichlet(ηβ, ηβ).",
"In the case of CLs, we use the following algorithm.",
"For examples, the algorithm creates the two trees in Fig.",
"1 (b) for the constraint CL(A, B) ∧ CL(A, C).",
"The constraint is achieved when η is large, since words in each topic are chosen from the distribution of either the left tree that zeros the occurrence probability of A, or the right tree that zeros those of B and C. Inference of ϕ and θ is achieved by alternately sampling topic z i for each word w i and Dirichlet tree q z for each topic z.",
"Since the Dirichlet tree distribution is conjugate to the multinomial distribution, the sampling equation of z i is easily derived like LDA as follows: p(z i = z | z −i , q, w) ∝ (n (d i ) −i,z + α) Iz(↑i) ∏ s γ (Cz(s↓i)) z + n (Cz(s↓i)) −i ∑ Cz(s) k ( γ (k) z + n (k) −i,z ) , where n (d) −i,z represents the number of words (ex- cluding w i ) assigning topic z in document d. n (k) −i,z represents the number of words (excluding w i ) assigning topic z in the subtree rooted at node k in tree q z .",
"I z (↑ i) and C z (s ↓ i) represents the set of internal nodes and the immediate child of node s, respectively, on the path from the root to leaf w i in tree q z .",
"C z (s) represents the set of children of node s in tree q z .",
"γ (k) z represents a weight of the edge to node k in tree q z .",
"Additionally, we define ∑ S s := ∑ s∈S .",
"Sampling of tree q z is achieved by sequentially sampling subtree q (r) z corresponding to the r-th connected component by using the following equation: p(q (r) z = q ′ | z, q −z , q (−r) z , w) ∝ |M r,q ′ |× I (q ′ ) z,r ∏ s Γ ( ∑ Cz(s) k γ (k) z ) ∏ Cz(s) k Γ ( γ (k) z + n (k) z ) Γ ( ∑ Cz(s) k (γ (k) z + n (k) z ) ) ∏ Cz(s) k Γ ( γ (k) z ) , where I (q ′ ) z,r represents the set of internal nodes in the subtree q ′ corresponding to the r-th connected component for tree q z .",
"|M r,q ′ | represents the size of the maximal independent set corresponding to the subtree q ′ for r-th connected component.",
"After sufficiently sampling z i and q z , we can infer posterior probabilitiesφ andθ using the last sampled z and q, in a similar manner to the standard LDA as follows.",
"θ (d) z = n (d) z + α ∑ T z ′ =1 ( n (d) z ′ + α ) ϕ (w) z = Iz(↑w) ∏ s γ (Cz(s↓w)) z + n (Cz(s↓w)) z ∑ Cz(s) k ( γ (k) z + n (k) z ) Logical Constraints on Words In this section, we address logical expressions of two links using disjunctions (∨) and negations (¬), as well as conjunctions (∧), e.g., ¬ML(A, B) ∨ ML(A, C).",
"We denote it as (∧,∨,¬)-expressions.",
"Since each negation can be removed in a preprocessing stage, we focus only on (∧,∨)-expressions.",
"Interpretation of negations is discussed in Sec.",
"3.4.",
"(∧,∨)-expressions of Links We propose a simple method that simultaneously achieves conjunctions and disjunctions of links, where the existing method can only treat conjunctions of links.",
"The key observation is that any Dirichlet trees constructed by MLs and CLs are essentially based only on two primitives.",
"One is Ep(A, B) that equalizes the occurrence probabilities of A and B in a topic as in Fig.",
"1(a) , and the other is Np(A) that zeros the occurrence probability of A in a topic as in the left tree of Fig.",
"1(b) .",
"The right tree of Fig.",
"1(b) is created by Np(B) ∧ Np(C).",
"Thus, we can substitute ML and CL with Ep and Np as follows: ML(A, B) = Ep(A, B) CL(A, B) = Np(A) ∨ Np(B) Using this substitution, we can compile a (∧, ∨)expression of links to the corresponding Dirichlet trees with the following algorithm.",
"1.",
"Substitute all links (ML and CL) with the corresponding primitives (Ep and Np).",
"2.",
"Calculate the minimum DNF of the primitives.",
"3.",
"Construct Dirichlet trees corresponding to the (monotone) monomials of the DNF.",
"Let us consider three words A = 'kung-fu', B = 'jackie', and C = 'bruce' in Sec.",
"1.",
"We want to constrain them with (ML(A, B) ∨ ML(A, C)) ∧ CL (B, C) .",
"In this case, the algorithm calculates the minimum DNF of primitives as (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ N p(C)) = (Ep(A, B) ∧ Np(B)) ∨ (Ep(A, B) ∧ Np(C)) ∨ (Ep(A, C) ∧ Np(B)) ∨ (Ep(A, C) ∧ Np(C)) and constructs four Dirichlet trees corresponding to the four monomials Ep(A, B) ∧ Np(B), Ep(A, B) ∧ Np(C), Ep(A, C) ∧ Np(B), and Ep(A, C) ∧ Np(C) in the last equation.",
"Considering only (∧)-expressions of links, our method is equivalent to the existing method in the original framework in terms of an asymptotic behavior of Dirichlet trees.",
"We define asymptotic behavior as Asymptotic Topic Family (ATF) as follows.",
"Definition 1 (Asymptotic Topic Family).",
"For any (∧, ∨)-expression f of primitives and any set W of words, we define the asymptotic topic family of f with respect to W as a family f * calculated by the following rules: Given (∧, ∨)-expressions f 1 and f 2 of primitives and words A, B ∈ W, (i) (f 1 ∨ f 2 ) * := f * 1 ∪ f * 2 (ii) (f 1 ∧ f 2 ) * := f * 1 ∩ f * 2 (iii) Ep * (A, B) := {∅, {A, B}} ⊗ 2 W−{A,B} , (iv) Np * (A) := 2 W−{A} Here, notation ⊗ is defined as X ⊗ Y := {x ∪ y | x ∈ X, y ∈ Y } for given two sets X and Y .",
"ATF expresses all combinations of words that can occur in a topic when η is large.",
"In the above example, the ATF of its expression with respect to W = {A, B, C} is calculated as ((ML(A, B) ∨ ML(A, C)) ∧ CL(B, C)) * = (Ep(A, B) ∨ Ep(A, C)) ∧ (Np(B) ∨ Np(C)) * = ( {∅, {A, B}} ⊗ 2 W−{A,B} ∪{∅, {A, C}} ⊗ 2 W−{A,C} ) ∩ ( 2 W−{B} ∪ 2 W−{C} ) = {∅, {B}, {C}, {A, B}, {A, C}}.",
"As we expected, the ATF of the last equation indicates such a constraint that either A and B or A and C must appear in the same topic, and B and C cannot appear in the same topic.",
"Note that the part of {B} satisfies ML(A, C) ∧ CL(B, C).",
"If you want to remove {B} and {C}, you can use exclusive disjunctions.",
"For the sake of simplicity, we omit descriptions about W when its instance is arbitrary or obvious from now on.",
"The next theorem gives the guarantee of asymptotic equivalency between our method and the existing method.",
"Let MIS(G) be the set of maximal independent sets of graph G. We define (x) ) is equivalent to the union of the power sets of every max- L := {{w, w ′ } | w, w ′ ∈ W, w ̸ = w ′ }.",
"imal independent set S ∈ MIS(G) of a graph G := (W, ℓ), that is, ∪ X∈X (∩ x∈X Np * (x) ) = ∪ S∈MIS(G) 2 S .",
"Proof.",
"For any (∧)-expressions of links characterized by ℓ ⊆ L, we denote f ℓ and G ℓ as the corresponding minimum DNF and graph, respectively.",
"We define U ℓ := ∪ S∈MIS(G ℓ ) 2 S .",
"When |ℓ| = 1, f * ℓ = U ℓ is trivial.",
"Assuming f * ℓ = U ℓ when |ℓ| > 1, for any set ℓ ′ := ℓ ∪ {{A, B}} with an additional link characterized by {A, B} ∈ L, we obtain f * ℓ ′ = ((Np(A) ∨ Np(B)) ∧ f ℓ ) * = (2 W−{A} ∪ 2 W−{B} ) ∩ U ℓ = ∪ S∈MIS(G ℓ ) ( (2 W−{A} ∩ 2 S ) ∪(2 W−{B} ∩ 2 S ) ) = ∪ S∈MIS(G ℓ ) (2 S−{A} ∪ 2 S−{B} ) = ∪ S∈MIS(G ℓ ′ ) 2 S = U ℓ ′ This proves the theorem by induction.",
"In the last line of the above deformation, we used ∪ S∈MIS(G) 2 S = ∪ S∈IS(G) 2 S and MIS(G ℓ ′ ) ⊆ ∪ S∈MIS(G ℓ ) ((S − {A}) ∪ (S − {B})) ⊆ IS(G ℓ ′ ), where IS(G) represents the set of all independent sets on graph G. In the above theorem, ∪ X∈X (∩ x∈X Np * (x) ) represents asymptotic behaviors of our method, while ∪ S∈MIS(G) 2 S represents those of the existing method.",
"By using a similar argument to the proof, we can prove the elements of the two sets are completely the same, i.e., ∩ x∈X Np * (x) = {2 S | S ∈ MIS(G)}.",
"This interestingly means that for any logical expression characterized by CLs, calculating its minimum DNF is the same as calculating the maximal independent sets of the corresponding graph, or the maximal cliques of its complement graph.",
"Shrinking Dirichlet Forests Focusing on asymptotic behaviors, we can reduce the number of Dirichlet trees, which means the performance improvement of Gibbs sampling for Dirichlet trees.",
"This is achieved just by minimizing DNF on asymptotic equivalence relation defined as follows.",
"Definition 3 (Asymptotic Equivalence Relation).",
"Given two (∧, ∨)-expressions f 1 , f 2 , we say that f 1 is asymptotically equivalent to f 2 , if and only if f * 1 = f * 2 .",
"We denote the relation as notation ≍, that is, f 1 ≍ f 2 ⇔ f * 1 = f * 2 .",
"The next proposition gives an intuitive understanding of why asymptotic equivalence relation can shrink Dirichlet forests.",
"Proposition 4.",
"For any two words A, B ∈ W, (a) Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) (b) Ep(A, B) ∧ Np(A) ≍ Np(A) ∧ Np(B) Proof.",
"We prove (a) only.",
"We conduct an experiment to clarify how many trees can be reduced by asymptotic equivalency.",
"In the experiment, we prepare conjunctions of random links of MLs and CLs when |W| = 10, and compare the average numbers of Dirichlet trees compiled by minimum DNF (M-DNF) and asymptotic minimum DNF (AM-DNF) in 100 trials.",
"The experimental result shown in Tab.",
"1 indicates that asymptotic equivalency effectively reduces the number of Dirichlet trees especially when the number of links is large.",
"Customizing New Links Two primitives Ep and Np allow us to easily customize new links without changing the algorithm.",
"Let us consider Imply-Link (A, B) or IL(A, B) , which is a constraint that B must appear if A appears in a topic (informally, A → B).",
"In this case, the setting IL(A, B) = Ep(A, B) ∨ Np(A) is acceptable, since the ATF of IL(A, B) IL(A, B) is effective when B has multiple meanings as mentioned later in Sec.",
"4. with respect to W = {A, B} is {∅, {A, B}, {B}}.",
"Informally regarding IL(A, B) as A → B and ML(A, B) as A ⇔ B, ML(A, B) seems to be the same meaning of IL(A, B) ∧ IL(B, A) .",
"However, this anticipation is wrong on the normal equivalency, i.e., ML(A, B) ̸ = IL(A, B) ∧ IL(B, A) .",
"The asymptotic equivalency can fulfill the anticipation with the next proposition.",
"This simultaneously suggests that our definition is semantically valid.",
"IL(B, A) ≍ ML(A, B) Proof.",
"From Proposition 4, Ep(A, B) = ML(A, B) Further, we can construct XIL(X 1 , · · · , X n , Y ) as an extended version of IL (A, B) , which allows us to use multiple conditions like Horn clauses.",
"This informally means ∧ n i=1 X i → Y as an extension of A → B.",
"In this case, we set Proposition 5.",
"For any two words A, B ∈ W, IL(A, B) ∧ IL(A, B) ∧ IL(B, A) = (Ep(A, B) ∨ Np(A)) ∧ (Ep(B, A) ∨ Np(B)) = Ep(A, B) ∨ (Ep(A, B) ∧ Np(A)) ∨ (Ep(A, B) ∧ Np(B)) ∨ (Np(A) ∧ Np(B)) ≍ Ep(A, B) ∨ (Np(A) ∧ Np(B)) ≍ XIL(X 1 , · · · , X n , Y ) = n ∧ i=1 Ep(X i , Y )∨ n ∨ i=1 Np(X i ).",
"When we want to isolate unnecessary words (i.e., stop words), we can use Isolate-Link (ISL) defined as ISL(X 1 , · · · , X n ) = n ∧ i=1 Np(X i ).",
"This is easier than considering CLs between highfrequency words and unnecessary words as described in ).",
"Negation of Links There are two types of interpretation for negation of links.",
"One is strong negation, which regards ¬ML (A, B) as \"A and B must not appear in the same topic\", and the other is weak negation, which regards it as \"A and B need not appear in the same topic\".",
"We set ¬ML(A, B) ≍ CL(A, B) for strong negation, while we just remove ¬ML(A, B) for weak negation.",
"We consider the strong negation in this study.",
"According to Def.",
"1, the ATF of the negation ¬f of primitive f seems to be defined as (¬f ) * := 2 W − f * .",
"However, this definition is not fit in strong negation, since ¬ML(A, B) ̸ ≍ CL(A, B) on the definition.",
"Thus we define it to be fit in strong negation as follows.",
"Definition 6 (ATF of strong negation of links).",
"Given a link L with arguments X 1 , · · · , X n , letting f L be the primitives of L, we define the ATF of the negation of L as (¬L(X 1 , · · · , X n )) * := (2 W − f * L (X 1 , · · · , X n )) ∪ 2 W−{X 1 ,··· ,Xn} .",
"Note that the definition is used not for primitives but for links.",
"Actually, the similar definition for primitives is not fit in strong negation, and so we must remove all negations in a preprocessing stage.",
"The next proposition gives the way to remove the negation of each link treated in this study.",
"We define no constraint condition as ϵ for the result of ISL.",
"Proposition 7.",
"For any words A, B, X 1 , · · · , X n , Y ∈ W, (a) ¬ML(A, B) ≍ CL(A, B) (b) ¬CL(A, B) ≍ ML(A, B) (c) ¬IL(A, B) ≍ Np(B) (d) ¬XIL(X 1 , · · · , X n , Y ) ≍ ∧ n−1 i=1 Ep(X i , X n ) ∧ Np(Y ) (e) ¬ISL(X 1 , · · · , X n ) ≍ ϵ Proof.",
"We prove (a) only.",
"(¬ML (A, B) ) * = (2 W − Ep * (A, B) (CL(A, B) ) * ) ∪ 2 W−{A,B} = (2 {A,B} − {∅, {A, B}}) ⊗ 2 W−{A,B} ∪ 2 W−{A,B} = {∅, {A}, {B}} ⊗ 2 W−{A,B} = 2 W−{A} ∪ 2 W−{B} = Np * (A) ∪ Np * (B) = Comparison on a Synthetic Corpus We experiment using a synthetic corpus {ABAB, ACAC} × 2 with vocabulary W = {A, B, C} to clarify the property of our method in the same way as in the existing work .",
"We set topic size as T = 2.",
"The goal of this experiment is to obtain two topics: a topic where A and B frequently occur and a topic where A and C frequently occur.",
"We abbreviate the grouping type as AB|AC.",
"In preliminary experiments, LDA yielded almost four grouping types: AB|AC, AB|C, AC|B, and A|BC.",
"Thus, we naively classify a grouping type of each result into the four types.",
"Concretely speaking, for any two topic-word probabilitiesφ andφ ′ , we calculate the average of Euclidian distances between each vector component ofφ and the corresponding one ofφ ′ , ignoring the difference of topic labels, and regard them as the same type if the average is less than 0.1.",
"Fig.",
"2 shows the occurrence rates of grouping types on 1,000 results after 1,000 iterations by LDA-DF with six constraints (1) no constraint, better.",
"The results of (1-4) can be achieved even by the existing method, and those of (5-6) can be achieved only by our method.",
"Roughly speaking, the figure shows that our method is clearly better than the existing method, since our method can obtain almost 100% as the rate of AB|AC, which is the best of all results, while the existing methods can only obtain about 60%, which is the best of the results of (1-4).",
"The result of (1) is the same result as LDA, because of no constraints.",
"In the result, the rate of AB|AC is only about 50%, since each of AB|C, AC|B, and A|BC remains at a high 15%.",
"As we expected, the result of (2) shows that ML(A, B) cannot remove AB|C although it can remove AC|B and A|BC, while the result of (3) shows that CL(B, C) cannot remove AB|C and AC|B although it can remove A|BC.",
"The result of (4) indicates that ML(A, B) ∧ CL(B, C) is the best of knowledge expressions in the existing method.",
"Note that ML(A, B) ∧ ML(A, C) implies ML(B, C) by transitive law and is inconsistent with all of the four types.",
"The result (80%) of (5) IL (B, A) is interestingly better than that (60%) of (4), despite that (5) has less primitives than (4).",
"The reason is that (5) allows A to appear with C, while (4) does not.",
"In the result of (6) ML (A, B)∨ML(A, C) , the constraint achieves almost 100%, which is the best of knowledge expressions in our method.",
"Of course, the constraint of (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C) can also achieve almost 100%.",
"Interactive Topic Analysis We demonstrate advantages of our method via interactive topic analysis on a real corpus, which consists of stemmed, down-cased 1,000 (positive) movie reviews used in (Pang and Lee, 2004) .",
"In this experiment, the parameters are set as α = 1, β = 0.01, η = 1000, and T = 20.",
"We first ran LDA-DF with 1,000 iterations without any constraints and noticed that most topics have stop words (e.g., 'have' and 'not') and corpus-specific, unnecessary words (e.g., 'film', 'movie'), as in the first block in Tab.",
"2.",
"To remove them, we added ISL('film', 'movie', 'have', 'not', 'n't') to the constraint of LDA-DF, which is compiled to one Dirichlet tree.",
"After the second run of LDA-DF with the isolate-link, we specified most topics such as Comedy, Disney, and Family, since cumbersome words are isolated, and so we noticed that two topics about Star Wars and Star Trek are merged, as in the second block.",
"Each topic label is determined by looking carefully at highfrequency words in the topic.",
"To split the merged two topics, we added CL ('jedi', ' trek') to the constraint, which is compiled to two Dirichlet trees.",
"However, after the third run of LDA-DF, we noticed that there is no topic only about Star Trek, since 'star' appears only in the Star Wars topic, as in the third block.",
"Note that the topic including 'trek' had other topics such as a topic about comedy film Big Lebowski.",
"We finally added ML('star', 'jedi') ∨ ML ('star', ' trek') to the constraint, which is compiled to four Dirichlet trees, to split the two topics considering polysemy of 'star'.",
"After the fourth run of LDA-DF, we appropriately obtained two topics about Star Wars and Star Trek as in the fourth block.",
"Note that our solution is not ad-hoc, and we can easily apply it to similar problems.",
"Conclusions We proposed a simple method to achieve topic models with logical constraints on words.",
"Our method compiles a given constraint to the prior of LDA-DF, which is a recently developed semisupervised extension of LDA with Dirichlet forest priors.",
"As well as covering the constraints in the original LDA-DF, our method allows us to construct new customized constraints without changing the algorithm.",
"We proved that our method is asymptotically the same as the existing method for any constraints with conjunctive expressions, and showed that asymptotic equivalency can shrink a constructed Dirichlet forest.",
"In the comparative Table 2 : Characteristic topics obtained in the experiment on the real corpus.",
"Four blocks in the table corresponds to the results of the four constraints ϵ, ISL(· · · ), CL('jedi', 'trek') ∧ ISL(· · · ), and (ML('jedi', 'trek') ∨ ML('star', 'trek')) ∧ CL('jedi', 'trek') ∧ ISL(· · · ), respectively.",
"Topic High frequency words in each topic ?",
"have give night film turn performance ?",
"not life have own first only family tell ?",
"movie have n't get good not see ?",
"have black scene tom death die joe ?",
"film have n't not make out well see Isolated have film movie not good make n't ?",
"star war trek planet effect special Comedy comedy funny laugh school hilarious Disney disney voice mulan animated song Family life love family mother woman father Isolated have film movie not make good n't StarWars star war lucas effect jedi special ?",
"science world trek fiction lebowski Comedy funny comedy laugh get hilarious Disney disney truman voice toy show Family family father mother boy child son Isolated have film movie not make good n't StarWars star war toy jedi menace phantom StarTrek alien effect star science special trek Comedy comedy funny laugh hilarious joke Disney disney voice animated mulan Family life love family man story child study on a synthetic corpus, we clarified the property of our method, and in the interactive topic analysis on a movie review corpus, we demonstrated its effectiveness.",
"In the future, we intend to address detail comparative studies on real corpora and consider a simple method integrating negations into a whole, although we removed them in a preprocessing stage in this study."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"LDA with Dirichlet Forest Priors",
"Logical Constraints on Words",
"(∧,∨)-expressions of Links",
"Shrinking Dirichlet Forests",
"Customizing New Links",
"Negation of Links",
"Comparison on a Synthetic Corpus",
"Interactive Topic Analysis",
"Conclusions"
]
} | GEM-SciDuet-train-119#paper-1323#slide-18 | Appendix Visualization of Priors | ML = Must-Link, CL = Cannot-Link, IL = Imply-Link | ML = Must-Link, CL = Cannot-Link, IL = Imply-Link | [] |
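The Asymptotic Topic Family (Definition 1) repeated in the record above can be checked by brute force on a tiny vocabulary. The sketch below is illustrative only — it assumes the worked example's vocabulary W = {A, B, C} and a plain set-based encoding — and reproduces the family {∅, {B}, {C}, {A, B}, {A, C}} for (ML(A, B) ∨ ML(A, C)) ∧ CL(B, C).

```python
from itertools import chain, combinations

def powerset(words):
    words = list(words)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(words, r)
                                         for r in range(len(words) + 1))}

def Ep(a, b, W):
    # Ep*(a, b) = {∅, {a, b}} ⊗ 2^(W − {a, b})
    return {base | rest
            for base in (frozenset(), frozenset({a, b}))
            for rest in powerset(W - {a, b})}

def Np(a, W):
    # Np*(a) = 2^(W − {a})
    return powerset(W - {a})

W = {"A", "B", "C"}
# (ML(A,B) v ML(A,C)) ^ CL(B,C): v becomes union, ^ becomes intersection
atf = (Ep("A", "B", W) | Ep("A", "C", W)) & (Np("B", W) | Np("C", W))
print({tuple(sorted(s)) for s in atf})   # members: (), (B,), (C,), (A,B), (A,C)
```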
GEM-SciDuet-train-120#paper-1330#slide-0 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-0 | Entity Typing | 1. Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
Information extraction [Ling 12, YY17]
Coreference resolution [Durrett 14]
Entity linking [Durrett 14, Raiman 18]
Question answering [Yavuz 16] | 1. Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
Information extraction [Ling 12, YY17]
Coreference resolution [Durrett 14]
Entity linking [Durrett 14, Raiman 18]
Question answering [Yavuz 16] | [] |
GEM-SciDuet-train-120#paper-1330#slide-1 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-1 | Scaling Up Entity Typing Mention Coverage | Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
Prior Work This Work
Titan V Titan V John John He
Reasoning over diverse, challenging mention strings | Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
Prior Work This Work
Titan V Titan V John John He
Reasoning over diverse, challenging mention strings | [] |
GEM-SciDuet-train-120#paper-1330#slide-2 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-2 | Scaling Up Entity Typing Type Coverage | Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
PER, Victim PER, Criminal PER, Criminal
ORG, Company OBJ, Product, Electronics PER, Researcher, Professional
Any frequent nouns from dictionary is allowed as a type | Bill robbed John, and he was arrested shortly afterwards.
2. Nvidia hands out Titan V for free to AI researchers.
PER, Victim PER, Criminal PER, Criminal
ORG, Company OBJ, Product, Electronics PER, Researcher, Professional
Any frequent nouns from dictionary is allowed as a type | [] |
GEM-SciDuet-train-120#paper-1330#slide-3 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-3 | Challenge 2 Very large label space | PER, criminal PER, victim PER, criminal
Bill robbed John. He was arrested shortly afterwards.
Nvidia hands out Titan V for free to AI researchers.
ORG, company OBJ, product PER, researcher, professional
[Word-cloud visualization of the full label space; interactive versions: https://homes.cs.washington.edu/~eunsol/finetype_visualization/figer_index.html and https://homes.cs.washington.edu/~eunsol/finetype_visualization/onto_index.html] | PER, criminal PER, victim PER, criminal
Bill robbed John. He was arrested shortly afterwards.
Nvidia hands out Titan V for free to AI researchers.
ORG, company OBJ, product PER, researcher, professional
[Word-cloud visualization of the full label space; interactive versions: https://homes.cs.washington.edu/~eunsol/finetype_visualization/figer_index.html and https://homes.cs.washington.edu/~eunsol/finetype_visualization/onto_index.html] | []
GEM-SciDuet-train-120#paper-1330#slide-4 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
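A short aside on the Model sentences quoted in the list above: the label predictor is a sigmoid over type scores (y = σ(W_t r), threshold 0.5), and the multitask objective applies binary cross-entropy to a granularity bin only when that bin contains at least one gold label. The snippet below is a minimal NumPy sketch written for this page, not the authors' released code; the bin index ranges, variable names, and helpers are illustrative assumptions taken from the quoted text.

```python
import numpy as np

# Assumed layout of a flattened label space: indices [0, 9) general,
# [9, 130) fine-grained, [130, N) ultra-fine.
BINS = {"general": (0, 9), "fine": (9, 130), "ultra": (130, None)}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_types(r, W_t, threshold=0.5):
    """Score every type with sigmoid(W_t @ r); keep all types above the
    threshold, or fall back to the single highest-scoring type."""
    y = sigmoid(W_t @ r)                              # (num_labels,)
    chosen = np.flatnonzero(y > threshold)
    return chosen if chosen.size else np.array([int(np.argmax(y))])

def multitask_bce(y, t):
    """Binary cross-entropy summed over the three granularity bins; a bin
    contributes only if the gold vector t has a positive label in it,
    mirroring the indicator terms 1_general(t), 1_fine(t), 1_ultra(t)."""
    eps, total = 1e-7, 0.0
    for lo, hi in BINS.values():
        t_bin = t[lo:hi]
        y_bin = np.clip(y[lo:hi], eps, 1.0 - eps)
        if t_bin.sum() > 0:                           # indicator 1_category(t)
            total += -np.sum(t_bin * np.log(y_bin)
                             + (1.0 - t_bin) * np.log(1.0 - y_bin))
    return total
```

In the paper the per-bin scores come from separate sub-matrices W_general, W_fine, W_ultra; here they are simply slices of one score vector to keep the sketch short.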
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-4 | This Talk | Task: Ultra-Fine Covers all entity mentions
Allows all concepts as types
Crowdsourcing ultra fine-grained typing data
New source of distant supervision
Multitask loss for predicting ultra-fine types
Sets state-of-the-art results on existing benchmark
New Task: Ultra-Fine Entity Typing | Task: Ultra-Fine Covers all entity mentions
Allows all concepts as types
Crowdsourcing ultra fine-grained typing data
New source of distant supervision
Multitask loss for predicting ultra-fine types
Sets state-of-the-art results on existing benchmark
New Task: Ultra-Fine Entity Typing | [] |
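The record above also describes head-word distant supervision (Section 3.2 of the quoted paper): the syntactic head of a nominal mention such as "the newly-released graphics card" directly yields context-sensitive types ("card", "graphics card"). The paper uses the Stanford dependency parser; the sketch below substitutes spaCy purely for illustration, and the tiny type_vocab set is a made-up stand-in for the roughly 10K-type vocabulary.

```python
# Hypothetical head-word labeler; spaCy stands in for the CoreNLP parser
# used in the paper, and type_vocab is a toy placeholder.
import spacy

nlp = spacy.load("en_core_web_sm")
type_vocab = {"graphics card", "card", "chairman", "president"}

def head_word_types(mention_text):
    """Read candidate type labels off a mention's head word, lowercased and
    lemmatized (approximating the plural-to-singular normalization)."""
    doc = nlp(mention_text)
    head = next((tok for tok in doc if tok.head == tok), doc[-1])  # parse root
    labels = {head.lemma_.lower()}
    # Extend to a multiword type when the preceding noun plus the head form
    # a phrase that exists in the type vocabulary.
    if head.i > 0 and doc[head.i - 1].pos_ == "NOUN":
        bigram = f"{doc[head.i - 1].text.lower()} {head.lemma_.lower()}"
        if bigram in type_vocab:
            labels.add(bigram)
    return labels

print(head_word_types("the newly-released graphics card"))
# expected under these toy assumptions: {'card', 'graphics card'}
```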
GEM-SciDuet-train-120#paper-1330#slide-5 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
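The Model sentences repeated in this record describe a single bidirectional LSTM over the whole sentence, a location embedding marking whether each token falls before, inside, or after the mention, and MLP-based attention a_i = SoftMax_i(v_a · relu(W_a h_i)). The PyTorch module below is a rough sketch of that context encoder written for this summary, not the authors' implementation; the dimensions are illustrative defaults rather than the hyperparameters from the paper's supplementary material, and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Sketch: word embedding + location embedding -> BiLSTM -> MLP
    attention -> weighted sum producing the context vector c."""
    def __init__(self, vocab_size, emb_dim=300, loc_dim=50,
                 hidden=100, attn_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)    # pre-trained in the paper
        self.loc_emb = nn.Embedding(3, loc_dim)              # before / inside / after
        self.lstm = nn.LSTM(emb_dim + loc_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.attn_mlp = nn.Linear(2 * hidden, attn_dim)      # plays the role of W_a
        self.attn_vec = nn.Linear(attn_dim, 1, bias=False)   # plays the role of v_a

    def forward(self, token_ids, loc_ids):
        # token_ids, loc_ids: (batch, seq_len)
        x = torch.cat([self.word_emb(token_ids), self.loc_emb(loc_ids)], dim=-1)
        h, _ = self.lstm(x)                                   # (batch, seq_len, 2*hidden)
        scores = self.attn_vec(torch.relu(self.attn_mlp(h)))  # (batch, seq_len, 1)
        a = torch.softmax(scores, dim=1)                      # attention over tokens
        return (a * h).sum(dim=1)                             # context vector c
```

The full model concatenates this context vector with a character-CNN plus attention-based mention representation before the sigmoid type classifier.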
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-5 | Fine grained NER | He was elected over John McCain
coarse fine grained Type Ontology grained
FIGER [Ling 12] OntoNotes [Gillick TypeNet [next talk] Person, Politician Ours
2 hierarchy level 3 hierarchy level hierarchy level No hierarchy | He was elected over John McCain
coarse fine grained Type Ontology grained
FIGER [Ling 12] OntoNotes [Gillick TypeNet [next talk] Person, Politician Ours
2 hierarchy level 3 hierarchy level hierarchy level No hierarchy | [] |
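Section 2.2 in the records above compares datasets by how much of the 1/|T|-weighted label mass the top-N most frequent types cover (4 types suffice for OntoNotes to reach 80%, 7 for FIGER, 429 for the new data). The helper below shows one way to compute such a coverage curve; the toy annotations are invented for illustration.

```python
from collections import Counter

def coverage_curve(examples):
    """examples: iterable of label sets T. Each type in an example is
    counted 1/|T| times; returns the cumulative fraction of label mass
    covered by the top-1, top-2, ... most frequent types."""
    mass = Counter()
    for labels in examples:
        for t in labels:
            mass[t] += 1.0 / len(labels)
    total = sum(mass.values())
    covered, curve = 0.0, []
    for _, m in mass.most_common():
        covered += m
        curve.append(covered / total)
    return curve

toy = [{"person", "politician", "leader"}, {"person"}, {"location", "city"}]
print(coverage_curve(toy))  # the top type ('person') already covers ~44% here
```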
GEM-SciDuet-train-120#paper-1330#slide-6 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
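A rough approximation of the footnote-3 extraction rule follows; the paper used the Stanford parser and copular edges, while this sketch uses spaCy's "attr"/"appos"/"conj" labels, so the dependency names and the model choice are assumptions.

```python
# Hedged sketch: extract candidate types from a Wikipedia definition sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

def definition_types(sentence):
    doc = nlp(sentence)
    types = []
    for tok in doc:
        # nominal predicate of the copula ("... is a ... player")
        if tok.dep_ == "attr" and tok.pos_ == "NOUN":
            types.append(tok.lemma_)
            # nouns attached by apposition or conjunction to that predicate
            types.extend(c.lemma_ for c in tok.children
                         if c.dep_ in ("appos", "conj") and c.pos_ == "NOUN")
    return types

# definition_types("Roger Federer is a Swiss professional tennis player.") -> ["player"]
```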
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-6 | Label Coverage Problem | He was elected over [ John McCain
In both, the top 9 types cover over 80% of the
In OntoNotes, 52% of mentions was marked
[Slide figure] Named NER vs. fine-grained NER; coarse vs. fine-grained type ontology; legible fragments: "Paris Agreement", "Security", "Mortgages", "Oil" | He was elected over [ John McCain
In both, the top 9 types cover over 80% of the
In OntoNotes, 52% of mentions was marked
[Slide figure] Named NER vs. fine-grained NER; coarse vs. fine-grained type ontology; legible fragments: "Paris Agreement", "Security", "Mortgages", "Oil" | [] |
GEM-SciDuet-train-120#paper-1330#slide-7 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-7 | Label Distribution In Evaluation Data | /organization https://homes.cs.washington.edu/~eunsol/finetype_visualization/figer_index.html /legal /country /country /building /company
/time /event OntoNotes [Gillick 14]
/art /location /city /company
[Slide figure] Word clouds of the label distribution in the evaluation datasets (interactive version at https://homes.cs.washington.edu/~eunsol/finetype_visualization/figer_index.html); the remaining text is unrecoverable word-cloud fragments | /organization https://homes.cs.washington.edu/~eunsol/finetype_visualization/figer_index.html /legal /country /building /company
/time /event OntoNotes [Gillick 14]
/art /location /city /company
[Slide figure] Word clouds of the label distribution in the evaluation datasets; the remaining text is unrecoverable word-cloud fragments | []
GEM-SciDuet-train-120#paper-1330#slide-8 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-8 | Automatic Mention Detection | Maximal noun phrases from the constituency parser (Manning et al 14)
Mentions from the co-reference resolution system (Lee et al 17)
In 1817, in collaboration with David Hare, he set up the Hindu College. | Maximal noun phrases from the constituency parser (Manning et al 14)
Mentions from the co-reference resolution system (Lee et al 17)
In 1817, in collaboration with David Hare, he set up the Hindu College. | [] |
GEM-SciDuet-train-120#paper-1330#slide-9 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-9 | Crowdsourcing Type Labels | Michael Buble putting career on hold after sons cancer diagnosis Person Parent Professional
Label space: 10K most common nouns from Wiktionary
Five crowd workers provide labels per each example
Collected 6K examples, 5.2 labels per example.
On average, 1 general type, 4 fine types | Michael Buble putting career on hold after sons cancer diagnosis Person Parent Professional
Label space: 10K most common nouns from Wiktionary
Five crowd workers provide labels per each example
Collected 6K examples, 5.2 labels per example.
On average, 1 general type, 4 fine types | [] |
GEM-SciDuet-train-120#paper-1330#slide-10 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-10 | Diverse Fine grained Types | town, company, space, mountain, work, murderer, journalist, army, outcome, politician, duty, document, women, employment, community, ballot, stage, host, son, friend, investigator, inflation, film, injection, album, music_group, food, milestone, chancellor, village, philosopher, military, medicine, river, health, incident, male, actor, citizenship,
language, prisoner, exhibition, cricketer, attack, singer, battle, religious_leader,
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation,
computer_game, promotion, disaster, historian, poll, institution, transportation,
painter, free, official, traveller, year, player, beverage, performer, biographer, priest, wind, cash, race, guest, area, agreement, prison, analyst, draw, love, police, actress
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, 2,300 unique types for 6K xamples power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, To cover 80% of labels, 429 types a e needed team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation, | town, company, space, mountain, work, murderer, journalist, army, outcome, politician, duty, document, women, employment, community, ballot, stage, host, son, friend, investigator, inflation, film, injection, album, music_group, food, milestone, chancellor, village, philosopher, military, medicine, river, health, incident, male, actor, citizenship,
language, prisoner, exhibition, cricketer, attack, singer, battle, religious_leader,
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation,
computer_game, promotion, disaster, historian, poll, institution, transportation,
painter, free, official, traveller, year, player, beverage, performer, biographer, priest, wind, cash, race, guest, area, agreement, prison, analyst, draw, love, police, actress
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, 2,300 unique types for 6K xamples power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, To cover 80% of labels, 429 types a e needed team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation, | [] |
GEM-SciDuet-train-120#paper-1330#slide-11 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-11 | Data Validation | town, company, space, mountain, work, murderer, journalist, army, outcome, politician, duty, document, women, employment, community, ballot, stage, host, son, friend, investigator, inflation, film, injection, album, music_group, food, milestone, chancellor, village, philosopher, military, medicine, river, health, incident, male, actor, citizenship,
language, prisoner, exhibition, cricketer, attack, singer, battle, religious_leader, 86% binary agreement
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, Only collects labels that majority of validators contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, (3/5) agreed coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation,
computer_game, promotion, disaster, historian, poll, institution, transportation,
painter, free, official, traveller, year, player, beverage, performer, biographer, priest, wind, cash, race, guest, area, agreement, prison, analyst, draw, love, police, actress | town, company, space, mountain, work, murderer, journalist, army, outcome, politician, duty, document, women, employment, community, ballot, stage, host, son, friend, investigator, inflation, film, injection, album, music_group, food, milestone, chancellor, village, philosopher, military, medicine, river, health, incident, male, actor, citizenship,
language, prisoner, exhibition, cricketer, attack, singer, battle, religious_leader, 86% binary agreement
economy, vice_president, man, benefit, agency, deity, painting, bread, effect, university, power, direction, competition, civilian, reviewer, worker, member, cinema, talk, thinker, Only collects labels that majority of validators contract, landmark, fashion_designer, citizen, investor, territory, train, moss, concert, team, troglodyte, consequence, staff, subject, professor, use, tournament, planet, city, (3/5) agreed coach, date, curator, poet, rule, goddess, symptom, senator, month, weapon, parent, crime, hiding, general, position, political, religion, cell, business, designation,
computer_game, promotion, disaster, historian, poll, institution, transportation,
painter, free, official, traveller, year, player, beverage, performer, biographer, priest, wind, cash, race, guest, area, agreement, prison, analyst, draw, love, police, actress | [] |
GEM-SciDuet-train-120#paper-1330#slide-12 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-12 | 1 Knowledge Base Supervision | [ Arnold Schwarzenegger] gives a speech at Mission
Serves service project on Veterans Day 2010.
Entity linking person, politician, athlete,
businessman, artist, actor, author | [ Arnold Schwarzenegger] gives a speech at Mission
Serves service project on Veterans Day 2010.
Entity linking person, politician, athlete,
businessman, artist, actor, author | [] |
GEM-SciDuet-train-120#paper-1330#slide-13 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-13 | 2 Wikipedia Supervision | Arnold Alois Schwarzenegger is an Austrian-
American actor, producer, businessman, investor, author, philanthropist, activist, politician and former professional body-builder.
[ Arnold Schwarzenegger] gives a speech at Mission
Serves service project on Veterans Day 2010.
4.6K unique types on 3.1M entities
Mexican National Championship competition
Palestinian Interest Committee movement
Giovanni Paolo Lancelotti canonist | Arnold Alois Schwarzenegger is an Austrian-
American actor, producer, businessman, investor, author, philanthropist, activist, politician and former professional body-builder.
[ Arnold Schwarzenegger] gives a speech at Mission
Serves service project on Veterans Day 2010.
4.6K unique types on 3.1M entities
Mexican National Championship competition
Palestinian Interest Committee movement
Giovanni Paolo Lancelotti canonist | [] |
GEM-SciDuet-train-120#paper-1330#slide-14 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-14 | Supervision Summary | Entity linking: General/Fine types, X (not context-sensitive), Good (uses a knowledge base)
Entity linking on Wikipedia definitions: Finer types, X, Better (uses a parser)
Head words: Finest types, O (context-sensitive), Best (uses a dependency parser) | Entity linking: General/Fine types, X (not context-sensitive), Good (uses a knowledge base)
Entity linking on Wikipedia definitions: Finer types, X, Better (uses a parser)
Head words: Finest types, O (context-sensitive), Best (uses a dependency parser) | []
GEM-SciDuet-train-120#paper-1330#slide-15 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-15 | 3 Head Word Supervision | [Controversial judge James Pickles] sentences Tracey
Scott to six months in prison after she admitted helping shoplifter.
Using the head word of the original noun phrase as a source of supervision.
[Consent forms , Institutional Review Boards,] peer review committees and data safety committees did not exist decades ago.
In [addition] there's an USB 1.1 port that can be used to attach to a printer. | [Controversial judge James Pickles] sentences Tracey
Scott to six months in prison after she admitted helping shoplifter.
Using the head word of the original noun phrase as a source of supervision.
[Consent forms , Institutional Review Boards,] peer review committees and data safety committees did not exist decades ago.
In [addition] there's an USB 1.1 port that can be used to attach to a printer. | [] |
GEM-SciDuet-train-120#paper-1330#slide-16 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-16 | Supervision Summary II | Source Cover Accuracy* Scale
KB Named Entities M
Wikipedia Named Entities M
* Manual examination on examples | Source Cover Accuracy* Scale
KB Named Entities M
Wikipedia Named Entities M
* Manual examination on examples | [] |
GEM-SciDuet-train-120#paper-1330#slide-17 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-17 | Bidirectional RNN Model | Closely follow previous model for fine-grained NER [Shimaoka 17]
Single LSTM to cover left, right context and mention | Closely follow previous model for fine-grained NER [Shimaoka 17]
Single LSTM to cover left, right context and mention | [] |
GEM-SciDuet-train-120#paper-1330#slide-18 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-18 | Multitask Objective | Binary classification log likelihood objective for each label
Sum loss at different type granularities | Binary classification log likelihood objective for each label
Sum loss at different type granularities | [] |
GEM-SciDuet-train-120#paper-1330#slide-19 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-19 | Experiments | Ultra-Fine Entity Typing Dataset
OntoNotes Fine-Grained Typing Dataset (Gillick et al 14)
Macro-averaged Precision, Recall, F1 | Ultra-Fine Entity Typing Dataset
OntoNotes Fine-Grained Typing Dataset (Gillick et al 14)
Macro-averaged Precision, Recall, F1 | [] |
GEM-SciDuet-train-120#paper-1330#slide-20 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-20 | Data Setup | Ultra-Fine Entity Typing Benchmark OntoNotes Dataset (Gillick et al 14)
2K crowdsourced 2.69M KB supervision
Train 20M Headword 2.1M Headword supervision
5M Entity Linking 0.6M Wikipedia supervision
Test 2K crowdsourced 8K crowdsourced | Ultra-Fine Entity Typing Benchmark OntoNotes Dataset (Gillick et al 14)
2K crowdsourced 2.69M KB supervision
Train 20M Headword 2.1M Headword supervision
5M Entity Linking 0.6M Wikipedia supervision
Test 2K crowdsourced 8K crowdsourced | [] |
GEM-SciDuet-train-120#paper-1330#slide-21 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-21 | Comparison Systems | AttentiveNER Model [Shimaoka et al., 2017]
Ablation on the different sets of supervision | AttentiveNER Model [Shimaoka et al., 2017]
Ablation on the different sets of supervision | [] |
GEM-SciDuet-train-120#paper-1330#slide-22 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-22 | Ultra Fine Entity Typing | Multitask loss encourages prediction on fine-grained labels, hurting precision but improves recall Mean Reciprocal Rank
Our model architecture (character-level CNN, single LSTM) improves the performance AttentiveNER | Multitask loss encourages prediction on fine-grained labels, hurting precision but improves recall Mean Reciprocal Rank
Our model architecture (character-level CNN, single LSTM) improves the performance AttentiveNER | [] |
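The multitask training objective quoted in the record above reads more easily in display form. The following is an editorial reconstruction of the same equations from the extracted text, not the authors' original typesetting; t is the target vector, y the sigmoid scores, and 1_category(t) indicates whether t contains at least one positive label in that granularity bin.

```latex
J_{\mathrm{all}} = J_{\mathrm{general}} \cdot \mathbb{1}_{\mathrm{general}}(t)
                 + J_{\mathrm{fine}} \cdot \mathbb{1}_{\mathrm{fine}}(t)
                 + J_{\mathrm{ultra}} \cdot \mathbb{1}_{\mathrm{ultra}}(t)

J = - \sum_{i} \big[ \, t_i \log(y_i) + (1 - t_i) \log(1 - y_i) \, \big]
```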
GEM-SciDuet-train-120#paper-1330#slide-23 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-23 | Ablation Study | All -Entity Linking -Headword
General F1 Fine F1 Ultra-Fine F1
person, organization, event, object politician, artist, building, company friend, accident, talk, president
Finer types are harder to predict
Headword is more important for ultra-fine types, entity linking for f ine types. | All -Entity Linking -Headword
General F1 Fine F1 Ultra-Fine F1
person, organization, event, object politician, artist, building, company friend, accident, talk, president
Finer types are harder to predict
Headword is more important for ultra-fine types, entity linking for f ine types. | [] |
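The Multitask Objective paragraph in the record above (labels split into general / fine / ultra-fine bins, with a bin updated only when it contains at least one positive label) can be made concrete with a small sketch. This is an illustrative NumPy version, not the authors' implementation; the 9 / 121 / 10,201 bin layout in the usage comment is a hypothetical split of the label space.

```python
import numpy as np

def multitask_bce_loss(y_pred, t, bins):
    """Bin-masked binary cross-entropy: a granularity bin contributes to the
    loss only if the target vector t has at least one positive label inside it
    (the indicator 1_category(t) from the paper text above)."""
    eps = 1e-12
    total = 0.0
    for idx in bins:                      # idx: indices of one granularity bin
        if t[idx].sum() > 0:              # bin has at least one positive label
            yi, ti = y_pred[idx], t[idx]
            total += -np.sum(ti * np.log(yi + eps) + (1.0 - ti) * np.log(1.0 - yi + eps))
    return total

# Hypothetical usage with a 9 (general) / 121 (fine) / 10,201 (ultra-fine) label space:
# n = 9 + 121 + 10201
# bins = [np.arange(0, 9), np.arange(9, 130), np.arange(130, n)]
# loss = multitask_bce_loss(sigmoid_scores, targets, bins)
```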
GEM-SciDuet-train-120#paper-1330#slide-24 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-24 | OntoNotes Fine grained Types | AttentiveNER AttentiveNER Our Data Ours + Our Data | AttentiveNER AttentiveNER Our Data Ours + Our Data | [] |
GEM-SciDuet-train-120#paper-1330#slide-25 | 1330 | Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict open types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178
],
"paper_content_text": [
"Introduction Entities can often be described by very fine grained types.",
"Consider the sentences \"Bill robbed John.",
"He was arrested.\"",
"The noun phrases \"John,\" \"Bill,\" and \"he\" have very specific types that can be inferred from the text.",
"This includes the facts that \"Bill\" and \"he\" are both likely \"criminal\" due to the \"robbing\" and \"arresting,\" while \"John\" is more likely a \"victim\" because he was \"robbed.\"",
"Such fine-grained types (victim, criminal) Table 1 : Examples of entity mentions and their annotated types, as annotated in our dataset.",
"The entity mentions are bold faced and in the curly brackets.",
"The bold blue types do not appear in existing fine-grained type ontologies.",
"as coreference resolution and question answering (e.g.",
"\"Who was the victim?\").",
"Inferring such types for each mention (John, he) is not possible given current typing models that only predict relatively coarse types and only consider named entities.",
"To address this challenge, we present a new task: given a sentence with a target entity mention, predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence.",
"Table 1 shows three examples that exhibit a rich variety of types at different granularities.",
"Our task effectively subsumes existing finegrained named entity typing formulations due to the use of a very large type vocabulary and the fact that we predict types for all noun phrases, including named entities, nominals, and pronouns.",
"Incorporating fine-grained entity types has improved entity-focused downstream tasks, such as relation extraction (Yaghoobzadeh et al., 2017a) , question answering (Yavuz et al., 2016) , query analysis (Balog and Neumayer, 2012) , and coreference resolution (Durrett and Klein, 2014) .",
"These systems used a relatively coarse type ontology.",
"However, manually designing the ontology is a challenging task, and it is difficult to cover all pos- Figure 1 : A visualization of all the labels that cover 90% of the data, where a bubble's size is proportional to the label's frequency.",
"Our dataset is much more diverse and fine grained when compared to existing datasets (OntoNotes and FIGER), in which the top 5 types cover 70-80% of the data.",
"sible concepts even within a limited domain.",
"This can be seen empirically in existing datasets, where the label distribution of fine-grained entity typing datasets is heavily skewed toward coarse-grained types.",
"For instance, annotators of the OntoNotes dataset marked about half of the mentions as \"other,\" because they could not find a suitable type in their ontology (see Figure 1 for a visualization and Section 2.2 for details).",
"Our more open, ultra-fine vocabulary, where types are free-form noun phrases, alleviates the need for hand-crafted ontologies, thereby greatly increasing overall type coverage.",
"To better understand entity types in an unrestricted setting, we crowdsource a new dataset of 6,000 examples.",
"Compared to previous fine-grained entity typing datasets, the label distribution in our data is substantially more diverse and fine-grained.",
"Annotators easily generate a wide range of types and can determine with 85% agreement if a type generated by another annotator is appropriate.",
"Our evaluation data has over 2,500 unique types, posing a challenging learning problem.",
"While our types are harder to predict, they also allow for a new form of contextual distant supervision.",
"We observe that text often contains cues that explicitly match a mention to its type, in the form of the mention's head word.",
"For example, \"the incumbent chairman of the African Union\" is a type of \"chairman.\"",
"This signal complements the supervision derived from linking entities to knowledge bases, which is context-oblivious.",
"For example, \"Clint Eastwood\" can be described with dozens of types, but context-sensitive typing would prefer \"director\" instead of \"mayor\" for the sentence \"Clint Eastwood won 'Best Director' for Million Dollar Baby.\"",
"We combine head-word supervision, which provides ultra-fine type labels, with traditional signals from entity linking.",
"Although the problem is more challenging at finer granularity, we find that mixing fine and coarse-grained supervision helps significantly, and that our proposed model with a multitask objective exceeds the performance of existing entity typing models.",
"Lastly, we show that head-word supervision can be used for previous formulations of entity typing, setting the new state-of-the-art performance on an existing finegrained NER benchmark.",
"Task and Data Given a sentence and an entity mention e within it, the task is to predict a set of natural-language phrases T that describe the type of e. The selection of T is context sensitive; for example, in \"Bill Gates has donated billions to eradicate malaria,\" Bill Gates should be typed as \"philanthropist\" and not \"inventor.\"",
"This distinction is important for context-sensitive tasks such as coreference resolution and question answering (e.g.",
"\"Which philanthropist is trying to prevent malaria?\").",
"We annotate a dataset of about 6,000 mentions via crowdsourcing (Section 2.1), and demonstrate that using an large type vocabulary substantially increases annotation coverage and diversity over existing approaches (Section 2.2).",
"Crowdsourcing Entity Types To capture multiple domains, we sample sentences from Gigaword (Parker et al., 2011) , OntoNotes (Hovy et al., 2006) , and web articles (Singh et al., 2012) .",
"We select entity mentions by taking maximal noun phrases from a constituency parser (Manning et al., 2014) and mentions from a coreference resolution system (Lee et al., 2017) .",
"We provide the sentence and the target entity mention to five crowd workers on Mechanical Turk, and ask them to annotate the entity's type.",
"To encourage annotators to generate fine-grained types, we require at least one general type (e.g.",
"person, organization, location) and two specific types (e.g.",
"doctor, fish, religious institute), from a type vocabulary of about 10K frequent noun phrases.",
"We use WordNet (Miller, 1995) to expand these types automatically by generating all their synonyms and hypernyms based on the most common sense, and ask five different annotators to validate the generated types.",
"Each pair of annotators agreed on 85% of the binary validation decisions (i.e.",
"whether a type is suitable or not) and 0.47 in Fleiss's κ.",
"To further improve consistency, the final type set contained only types selected by at least 3/5 annotators.",
"Further crowdsourcing details are available in the supplementary material.",
"Our collection process focuses on precision.",
"Thus, the final set is diverse but not comprehensive, making evaluation non-trivial (see Section 5).",
"Data Analysis We collected about 6,000 examples.",
"For analysis, we classified each type into three disjoint bins: • 9 general types: person, location, object, organization, place, entity, object, time, event • 121 fine-grained types, mapped to fine-grained entity labels from prior work ) (e.g.",
"film, athlete) • 10,201 ultra-fine types, encompassing every other label in the type space (e.g.",
"detective, lawsuit, temple, weapon, composer) On average, each example has 5 labels: 0.9 general, 0.6 fine-grained, and 3.9 ultra-fine types.",
"Among the 10,000 ultra-fine types, 2,300 unique types were actually found in the 6,000 crowdsourced examples.",
"Nevertheless, our distant supervision data (Section 3) provides positive training examples for every type in the entire vocabulary, and our model (Section 4) can and does predict from a 10K type vocabulary.",
"For example, Figure 2 : The label distribution across different evaluation datasets.",
"In existing datasets, the top 4 or 7 labels cover over 80% of the labels.",
"In ours, the top 50 labels cover less than 50% of the data.",
"the model correctly predicts \"television network\" and \"archipelago\" for some mentions, even though that type never appears in the 6,000 crowdsourced examples.",
"Improving Type Coverage We observe that prior fine-grained entity typing datasets are heavily focused on coarse-grained types.",
"To quantify our observation, we calculate the distribution of types in FIGER , OntoNotes , and our data.",
"For examples with multiple types (|T | > 1), we counted each type 1/|T | times.",
"Figure 2 shows the percentage of labels covered by the top N labels in each dataset.",
"In previous enitity typing datasets, the distribution of labels is highly skewed towards the top few labels.",
"To cover 80% of the examples, FIGER requires only the top 7 types, while OntoNotes needs only 4; our dataset requires 429 different types.",
"Figure 1 takes a deeper look by visualizing the types that cover 90% of the data, demonstrating the diversity of our dataset.",
"It is also striking that more than half of the examples in OntoNotes are classified as \"other,\" perhaps because of the limitation of its predefined ontology.",
"Improving Mention Coverage Existing datasets focus mostly on named entity mentions, with the exception of OntoNotes, which contained nominal expressions.",
"This has implications on the transferability of FIGER/OntoNotes-based models to tasks such as coreference resolution, which need to analyze all types of entity mentions (pronouns, nominal expressions, and named entity Distant Supervision Training data for fine-grained NER systems is typically obtained by linking entity mentions and drawing their types from knowledge bases (KBs).",
"This approach has two limitations: recall can suffer due to KB incompleteness (West et al., 2014) , and precision can suffer when the selected types do not fit the context (Ritter et al., 2011) .",
"We alleviate the recall problem by mining entity mentions that were linked to Wikipedia in HTML, and extract relevant types from their encyclopedic definitions (Section 3.1).",
"To address the precision issue (context-insensitive labeling), we propose a new source of distant supervision: automatically extracted nominal head words from raw text (Section 3.2).",
"Using head words as a form of distant supervision provides fine-grained information about named entities and nominal mentions.",
"While a KB may link \"the 44th president of the United States\" to many types such as author, lawyer, and professor, head words provide only the type \"president\", which is relevant in the context.",
"We experiment with the new distant supervision sources as well as the traditional KB supervision.",
"Table 2 shows examples and statistics for each source of supervision.",
"We annotate 100 examples from each source to estimate the noise and usefulness in each signal (precision in Table 2 ).",
"Entity Linking For KB supervision, we leveraged training data from prior work by manually mapping their ontology to our 10,000 noun type vocabulary, which covers 130 of our labels (general and fine-grained).",
"2 Section 6 defines this mapping in more detail.",
"To improve both entity and type coverage of KB supervision, we use definitions from Wikipedia.",
"We follow Shnarch et al.",
"() who observed that the first sentence of a Wikipedia article often states the entity's type via an \"is a\" relation; for example, \"Roger Federer is a Swiss professional tennis player.\"",
"Since we are using a large type vocabulary, we can now mine this typing information.",
"3 We extracted descriptions for 3.1M entities which contain 4,600 unique type labels such as \"competition,\" \"movement,\" and \"village.\"",
"We bypass the challenge of automatically linking entities to Wikipedia by exploiting existing hyperlinks in web pages (Singh et al., 2012) , following prior work Yosef et al., 2012) .",
"Since our heuristic extraction of types from the definition sentence is somewhat noisy, we use a more conservative entity linking policy 4 that yields a signal with similar overall accuracy to KB-linked data.",
"2 Data from: https://github.com/ shimaokasonse/NFGEC 3 We extract types by applying a dependency parser (Manning et al., 2014) to the definition sentence, and taking nouns that are dependents of a copular edge or connected to nouns linked to copulars via appositive or conjunctive edges.",
"4 Only link if the mention contains the Wikipedia entity's name and the entity's name contains the mention's head.",
"Contextualized Supervision Many nominal entity mentions include detailed type information within the mention itself.",
"For example, when describing Titan V as \"the newlyreleased graphics card\", the head words and phrases of this mention (\"graphics card\" and \"card\") provide a somewhat noisy, but very easy to gather, context-sensitive type signal.",
"We extract nominal head words with a dependency parser (Manning et al., 2014) from the Gigaword corpus as well as the Wikilink dataset.",
"To support multiword expressions, we included nouns that appear next to the head if they form a phrase in our type vocabulary.",
"Finally, we lowercase all words and convert plural to singular.",
"Our analysis reveals that this signal has a comparable accuracy to the types extracted from entity linking (around 80%).",
"Many errors are from the parser, and some errors stem from idioms and transparent heads (e.g.",
"\"parts of capital\" labeled as \"part\").",
"While the headword is given as an input to the model, with heavy regularization and multitasking with other supervision sources, this supervision helps encode the context.",
"Model We design a model for predicting sets of types given a mention in context.",
"The architecture resembles the recent neural AttentiveNER model (Shimaoka et al., 2017) , while improving the sentence and mention representations, and introducing a new multitask objective to handle multiple sources of supervision.",
"The hyperparameter settings are listed in the supplementary material.",
"Context Representation Given a sentence x 1 , .",
".",
".",
", x n , we represent each token x i using a pre-trained word embedding w i .",
"We concatenate an additional location embedding l i which indicates whether x i is before, inside, or after the mention.",
"We then use [x i ; l i ] as an input to a bidirectional LSTM, producing a contextualized representation h i for each token; this is different from the architecture of Shimaoka et al.",
"2017 , who used two separate bidirectional LSTMs on each side of the mention.",
"Finally, we represent the context c as a weighted sum of the contextualized token representations using MLP-based attention: a i = SoftMax i (v a · relu(W a h i )) Where W a and v a are the parameters of the attention mechanism's MLP, which allows interaction between the forward and backward directions of the LSTM before computing the weight factors.",
"Mention Representation We represent the mention m as the concatenation of two items: (a) a character-based representation produced by a CNN on the entire mention span, and (b) a weighted sum of the pre-trained word embeddings in the mention span computed by attention, similar to the mention representation in a recent coreference resolution model (Lee et al., 2017) .",
"The final representation is the concatenation of the context and mention representations: r = [c; m].",
"Label Prediction We learn a type label embedding matrix W t ∈ R n×d where n is the number of labels in the prediction space and d is the dimension of r. This matrix can be seen as a combination of three sub matrices, W general , W f ine , W ultra , each of which contains the representations of the general, fine, and ultra-fine types respectively.",
"We predict each type's probability via the sigmoid of its inner product with r: y = σ(W t r).",
"We predict every type t for which y t > 0.5, or arg max y t if there is no such type.",
"Multitask Objective The distant supervision sources provide partial supervision for ultra-fine types; KBs often provide more general types, while head words usually provide only ultra-fine types, without their generalizations.",
"In other words, the absence of a type at a different level of abstraction does not imply a negative signal; e.g.",
"when the head word is \"inventor\", the model should not be discouraged to predict \"person\".",
"Prior work used a customized hinge loss or max margin loss to improve robustness to noisy or incomplete supervision.",
"We propose a multitask objective that reflects the characteristic of our training dataset.",
"Instead of updating all labels for each example, we divide labels into three bins (general, fine, and ultra-fine), and update labels only in bin containing at least one positive label.",
"Specifically, the training objective is to minimize J where t is the target vector at each granularity: J all = J general · 1 general (t) + J fine · 1 fine (t) + J ultra · 1 ultra (t) Where 1 category (t) is an indicator function that checks if t contains a type in the category, and J category is the category-specific logistic regression objective: J = − i t i · log(y i ) + (1 − t i ) · log(1 − y i ) Evaluation Experiment Setup The crowdsourced dataset (Section 2.1) was randomly split into train, development, and test sets, each with about 2,000 examples.",
"We use this relatively small manuallyannotated training set (Crowd in Table 4 ) alongside the two distant supervision sources: entity linking (KB and Wikipedia definitions) and head words.",
"To combine supervision sources of different magnitudes (2K crowdsourced data, 4.7M entity linking data, and 20M head words), we sample a batch of equal size from each source at each iteration.",
"We reimplement the recent AttentiveNER model (Shimaoka et al., 2017) for reference.",
"5 We report macro-averaged precision, recall, and F1, and the average mean reciprocal rank (MRR).",
"Results Table 3 shows the performance of our model and our reimplementation of Atten-tiveNER.",
"Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.",
"The MRR score shows that our model is slightly better than the baseline at ranking correct types above incorrect ones.",
"Table 4 shows the performance breakdown for different type granularity and different supervision.",
"Overall, as seen in previous work on finegrained NER literature , finer labels were more challenging to predict than coarse grained labels, and this issue is exacerbated when dealing with ultra-fine types.",
"All sources of supervision appear to be useful, with crowdsourced examples making the biggest impact.",
"Head word supervision is particularly helpful for predicting ultra-fine labels, while entity linking improves fine label prediction.",
"The low general type performance is partially because of nominal/pronoun mentions (e.g.",
"\"it\"), and because of the large type inventory (sometimes \"location\" and \"place\" are annotated interchangeably).",
"Analysis We manually analyzed 50 examples from the development set, four of which we present in Table 5 .",
"Overall, the model was able to generate accurate general types and a diverse set of type labels.",
"Despite our efforts to annotate a comprehensive type set, the gold labels still miss many potentially correct labels (example (a): \"man\" is reasonable but counted as incorrect).",
"This makes the precision estimates lower than the actual performance level, with about half the precision errors belonging to this category.",
"Real precision errors include predicting co-hyponyms (example (b): \"accident\" instead of \"attack\"), and types that may be true, but are not supported by the context.",
"We found that the model often abstained from predicting any fine-grained types.",
"Especially in challenging cases as in example (c), the model predicts only general types, explaining the low recall numbers (28% of examples belong to this category).",
"Even when the model generated correct fine-grained types as in example (d), the recall was often fairly low since it did not generate a complete set of related fine-grained labels.",
"Estimating the performance of a model in an incomplete label setting and expanding label coverage are interesting areas for future work.",
"Our task also poses a potential modeling challenge; sometimes, the model predicts two incongruous types (e.g.",
"\"location\" and \"person\"), which points towards modeling the task as a joint set prediction task, rather than predicting labels individually.",
"We provide sample outputs on the project website.",
"Improving Existing Fine-Grained NER with Better Distant Supervision We show that our model and distant supervision can improve performance on an existing finegrained NER task.",
"We chose the widely-used OntoNotes dataset which includes nominal and named entity mentions.",
"6 6 While we were inspired by FIGER , the dataset presents technical difficulties.",
"The test set has only 600 examples, and the development set was labeled with distant supervision, not manual annotation.",
"We therefore focus our evaluation on OntoNotes.",
"Augmenting the Training Data The original OntoNotes training set (ONTO in Tables 6 and 7) is extracted by linking entities to a KB.",
"We supplement this dataset with our two new sources of distant supervision: Wikipedia definition sentences (WIKI) and head word supervision (HEAD) (see Section 3).",
"To convert the label space, we manually map a single noun from our natural-language vocabulary to each formal-language type in the OntoNotes ontology.",
"77% of OntoNote's types directly correspond to suitable noun labels (e.g.",
"\"doctor\" to \"/person/doctor\"), whereas the other cases were mapped with minimal manual effort (e.g.",
"\"musician\" to \"person/artist/music\", \"politician\" to \"/person/political figure\").",
"We then expand these labels according to the ontology to include their hypernyms (\"/person/political figure\" will also generate \"/person\").",
"Lastly, we create negative examples by assigning the \"/other\" label to examples that are not mapped to the ontology.",
"The augmented dataset contains 2.5M/0.6M new positive/negative examples, of which 0.9M/0.1M are from Wikipedia definition sentences and 1.6M/0.5M from head words.",
"Experiment Setup We compare performance to other published results and to our reimplementation of AttentiveNER (Shimaoka et al., 2017) .",
"We also compare models trained with different sources of supervision.",
"For this dataset, we did not use our multitask objective (Section 4), since expanding types to include their ontological hypernyms largely eliminates the partial supervision as-Acc.",
"Ma-F1 Mi-F1 AttentiveNER++ 51.7 70.9 64.9 AFET 55.",
"Table 7 : Ablation study on the OntoNotes finegrained entity typing development.",
"The second row isolates dataset improvements, while the third row isolates the model.",
"sumption.",
"Following prior work, we report macroand micro-averaged F1 score, as well as accuracy (exact set match).",
"Results Table 6 shows the overall performance on the test set.",
"Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the art result.",
"7 In Table 7 , we show an ablation study.",
"Our new supervision sources improve the performance of both the AttentiveNER model and our own.",
"We observe that every supervision source improves performance in its own right.",
"Particularly, the naturally-occurring head-word supervision seems to be the prime source of improvement, increasing performance by about 10% across all metrics.",
"Predicting Miscellaneous Types While analyzing the data, we observed that over half of the mentions in OntoNotes' development set were annotated only with the miscellaneous type (\"/other\").",
"For both models in our evaluation, detecting the miscellaneous category is substantially easier than Conclusion Using virtually unrestricted types allows us to expand the standard KB-based training methodology with typing information from Wikipedia definitions and naturally-occurring head-word supervision.",
"These new forms of distant supervision boost performance on our new dataset as well as on an existing fine-grained entity typing benchmark.",
"These results set the first performance levels for our evaluation dataset, and suggest that the data will support significant future work."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"8"
],
"paper_header_content": [
"Introduction",
"Task and Data",
"Crowdsourcing Entity Types",
"Data Analysis",
"Distant Supervision",
"Entity Linking",
"Contextualized Supervision",
"Model",
"Evaluation",
"Improving Existing Fine-Grained NER with Better Distant Supervision",
"Conclusion"
]
} | GEM-SciDuet-train-120#paper-1330#slide-25 | Example Outputs | More Examples at: https://homes.cs.washington.edu/~eunsol/_site/acl18_sample_output.html
Evaluation is still challenging : annotation coverage can be improved
Model suffers from recall problem
Joint modeling of type labels would be helpful | More Examples at: https://homes.cs.washington.edu/~eunsol/_site/acl18_sample_output.html
Evaluation is still challenging : annotation coverage can be improved
Model suffers from recall problem
Joint modeling of type labels would be helpful | [] |
GEM-SciDuet-train-121#paper-1331#slide-1 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-1 | Knowledge graph | A directed graph composed of entities (nodes) and relations (edges)
(Cristiano Ronaldo, bornIn, Funchal)
(Cristiano Ronaldo, playsFor, Real Madrid)
(Cristiano Ronaldo, teammates, Sergio Ramos)
(Sergio Ramos, bornIn, Camas)
(Sergio Ramos, playsFor, Real Madrid)
(Real Madrid, locatedIn, Spain) | A directed graph composed of entities (nodes) and relations (edges)
(Cristiano Ronaldo, bornIn, Funchal)
(Cristiano Ronaldo, playsFor, Real Madrid)
(Cristiano Ronaldo, teammates, Sergio Ramos)
(Sergio Ramos, bornIn, Camas)
(Sergio Ramos, playsFor, Real Madrid)
(Real Madrid, locatedIn, Spain) | [] |
GEM-SciDuet-train-121#paper-1331#slide-2 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-2 | Knowledge graph embedding | Learn to represent entities and relations in continuous vector spaces
Entities as points in vector spaces (vectors)
Relations as operations between entities | Learn to represent entities and relations in continuous vector spaces
Entities as points in vector spaces (vectors)
Relations as operations between entities | [] |
GEM-SciDuet-train-121#paper-1331#slide-3 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-3 | Knowledge graph embedding cont | Easy computation and inference on knowledge graphs
Is Spain more similar to Camas (a municipality located in Spain) or Portugal
(both Portugal and Spain are European countries)?
Spain Camas Spain Portugal
What is the relationship between Cristiano Ronaldo and Portugal?
C. Ronaldo Portugal teammates | Easy computation and inference on knowledge graphs
Is Spain more similar to Camas (a municipality located in Spain) or Portugal
(both Portugal and Spain are European countries)?
Spain Camas Spain Portugal
What is the relationship between Cristiano Ronaldo and Portugal?
C. Ronaldo Portugal teammates | [] |
GEM-SciDuet-train-121#paper-1331#slide-4 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-4 | Previous approaches | Simple models developed over RDF triples, e.g., TransE, RESCAL,
Designing more complicated triple scoring models
Usually with higher computational complexity
Incorporating extra information beyond RDF triples
Not always applicable to all knowledge graphs | Simple models developed over RDF triples, e.g., TransE, RESCAL,
Designing more complicated triple scoring models
Usually with higher computational complexity
Incorporating extra information beyond RDF triples
Not always applicable to all knowledge graphs | [] |
GEM-SciDuet-train-121#paper-1331#slide-5 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-5 | This work | Using simple constraints to improve knowledge graph embedding
Non-negativity constraints on entity representations
Approximate entailment constraints on relation representations
Code and data available at https://github.com/iieir-km/ComplEx-NNE_AER | Using simple constraints to improve knowledge graph embedding
Non-negativity constraints on entity representations
Approximate entailment constraints on relation representations
Code and data available at https://github.com/iieir-km/ComplEx-NNE_AER | [] |
GEM-SciDuet-train-121#paper-1331#slide-6 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-6 | Basic embedding model ComplEx | Entity and relation representations: complex-valued vectors
Re Im Re Im
Triple scoring function: multi-linear dot product
Triples with higher scores are more likely to be true | Entity and relation representations: complex-valued vectors
Re Im Re Im
Triple scoring function: multi-linear dot product
Triples with higher scores are more likely to be true | [] |
GEM-SciDuet-train-121#paper-1331#slide-7 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-7 | Non negativity of entity representations | Uneconomical to store negative properties of an entity/concept
Positive properties of cats
Cats have four legs
Negative properties of cats
Cats are not vehicles
Cats do not have wheels
Cats are not used for communication | Uneconomical to store negative properties of an entity/concept
Positive properties of cats
Cats have four legs
Negative properties of cats
Cats are not vehicles
Cats do not have wheels
Cats are not used for communication | [] |
GEM-SciDuet-train-121#paper-1331#slide-8 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-8 | Approximate entailment for relations | : relation approximately entails relation with confidence level
: a person born in a country is very likely, but not necessarily, to have a nationality of that country
Can be derived automatically by modern rule mining systems | : relation approximately entails relation with confidence level
: a person born in a country is very likely, but not necessarily, to have a nationality of that country
Can be derived automatically by modern rule mining systems | [] |
GEM-SciDuet-train-121#paper-1331#slide-9 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-9 | Approximate entailment for relations cont | A sufficient condition for
Introducing confidence and allowing slackness in
A higher confidence level shows less tolerance for violating the constraints | A sufficient condition for
Introducing confidence and allowing slackness in
A higher confidence level shows less tolerance for violating the constraints | [] |
GEM-SciDuet-train-121#paper-1331#slide-10 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-10 | Overall model | Basic embedding model of ComplEx + non-negativity constraints + approximate entailment constraints
logistic loss for ComplEx
approximate entailment constraints on relation representations
non-negativity constraints on entity representations | Basic embedding model of ComplEx + non-negativity constraints + approximate entailment constraints
logistic loss for ComplEx
approximate entailment constraints on relation representations
non-negativity constraints on entity representations | [] |
GEM-SciDuet-train-121#paper-1331#slide-11 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-11 | Complexity analysis | Space complexity: the same as that of ComplEx
is the number of entities
is the number of relations
is the dimensionality of the embedding space
Time complexity per iteration:
is the average number of entities in a mini-batch
is the total number of approximate entailments | Space complexity: the same as that of ComplEx
is the number of entities
is the number of relations
is the dimensionality of the embedding space
Time complexity per iteration:
is the average number of entities in a mini-batch
is the total number of approximate entailments | [] |
GEM-SciDuet-train-121#paper-1331#slide-12 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-12 | Experimental setups | WN18: subset of WordNet
FB15k: subset of Freebase
DB100k: subset of DBpedia
AMIE+ with confidence level higher than 0.8 | WN18: subset of WordNet
FB15k: subset of Freebase
DB100k: subset of DBpedia
AMIE+ with confidence level higher than 0.8 | [] |
GEM-SciDuet-train-121#paper-1331#slide-13 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
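The paper content above defines the ComplEx scoring function of Eq. (1) and the truncation step that keeps entity embeddings inside [0, 1]^d after each SGD update (Eq. (2)). The NumPy sketch below illustrates both pieces; it is not the authors' implementation, and the function names are chosen here purely for illustration.

```python
import numpy as np

def complex_score(e_i: np.ndarray, r_k: np.ndarray, e_j: np.ndarray) -> float:
    """phi(e_i, r_k, e_j) = Re( sum_l [e_i]_l * [r_k]_l * conj([e_j]_l) ), as in Eq. (1)."""
    return float(np.real(np.sum(e_i * r_k * np.conj(e_j))))

def project_entity(e: np.ndarray) -> np.ndarray:
    """Clip real and imaginary parts of an entity embedding into [0, 1] after an SGD step (Eq. (2))."""
    return np.clip(e.real, 0.0, 1.0) + 1j * np.clip(e.imag, 0.0, 1.0)
```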
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-13 | Experimental setups cont | To complete a triple (e_i, r_k, e_j) with e_i or e_j missing
Simple embedding models based on RDF triples
Other extensions of ComplEx incorporating logic rules
Recently developed neural network architectures
ComplEx-NNE: only with non-negativity constraints
ComplEx-NNE+AER: also with approximate entailment constraints | To complete a triple (e_i, r_k, e_j) with e_i or e_j missing
Simple embedding models based on RDF triples
Other extensions of ComplEx incorporating logic rules
Recently developed neural network architectures
ComplEx-NNE: only with non-negativity constraints
ComplEx-NNE+AER: also with approximate entailment constraints | [] |
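The slide row above contrasts ComplEx-NNE with ComplEx-NNE+AER, which differ only in the relation-constraint penalty of Eq. (8) described in the paper content. Below is a hedged sketch of that penalty term, assuming NumPy and dictionary-based containers for illustration; the authors' released code may organize this differently.

```python
import numpy as np

def entailment_penalty(entailments, relation_emb, mu):
    """Sum of the two penalty terms of Eq. (8) over all approximate entailments.

    `entailments` maps (p, q) relation-name pairs to confidence lambda;
    `relation_emb` maps relation names to complex-valued vectors;
    `mu` is the penalty coefficient.
    """
    total = 0.0
    for (p, q), lam in entailments.items():
        r_p, r_q = relation_emb[p], relation_emb[q]
        # hinge penalty on real parts: lambda * sum(max(0, Re(r_p) - Re(r_q)))
        total += lam * float(np.sum(np.maximum(0.0, r_p.real - r_q.real)))
        # squared penalty on imaginary parts: lambda * sum((Im(r_p) - Im(r_q))^2)
        total += lam * float(np.sum((r_p.imag - r_q.imag) ** 2))
    return mu * total
```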
GEM-SciDuet-train-121#paper-1331#slide-14 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-14 | Link prediction results | Simple embedding models Incorporating logic rules Neural network architectures
ComplEx-NNE+AER can beat very strong baselines just by introducing the simple constraints | Simple embedding models Incorporating logic rules Neural network architectures
ComplEx-NNE+AER can beat very strong baselines just by introducing the simple constraints | [] |
GEM-SciDuet-train-121#paper-1331#slide-15 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-15 | Analysis on entity representations | Visualization of entity representations
Pick 4 types reptile/wine region /species/programming language, and randomly select 30 entities from each type
Visualize the representations of these entities learned by
Compact and interpretable entity representations
Each entity is represented by only a relatively small number of active dimensions
Entities with the same type tend to activate the same set of dimensions | Visualization of entity representations
Pick 4 types reptile/wine region /species/programming language, and randomly select 30 entities from each type
Visualize the representations of these entities learned by
Compact and interpretable entity representations
Each entity is represented by only a relatively small number of active dimensions
Entities with the same type tend to activate the same set of dimensions | [] |
GEM-SciDuet-train-121#paper-1331#slide-16 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-16 | Analysis on entity representations cont | Semantic purity of latent dimensions
For each latent dimension, pick top K percent of entities with the highest activation values on this dimension
Calculate the entropy of the type distribution of these entities
Latent dimensions with higher semantic purity
A lower entropy means entities along this dimension tend to have the same type | Semantic purity of latent dimensions
For each latent dimension, pick top K percent of entities with the highest activation values on this dimension
Calculate the entropy of the type distribution of these entities
Latent dimensions with higher semantic purity
A lower entropy means entities along this dimension tend to have the same type | [] |
GEM-SciDuet-train-121#paper-1331#slide-17 | 1331 | Improving Knowledge Graph Embedding Using Simple Constraints | Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239
],
"paper_content_text": [
"Introduction The past decade has witnessed great achievements in building web-scale knowledge graphs (KGs), e.g., Freebase (Bollacker et al., 2008) , DBpedia (Lehmann et al., 2015) , and Google's Knowledge * Corresponding author: Quan Wang.",
"Vault (Dong et al., 2014) .",
"A typical KG is a multirelational graph composed of entities as nodes and relations as different types of edges, where each edge is represented as a triple of the form (head entity, relation, tail entity).",
"Such KGs contain rich structured knowledge, and have proven useful for many NLP tasks (Wasserman-Pritsker et al., 2015; Hoffmann et al., 2011; Yang and Mitchell, 2017) .",
"Recently, the concept of knowledge graph embedding has been presented and quickly become a hot research topic.",
"The key idea there is to embed components of a KG (i.e., entities and relations) into a continuous vector space, so as to simplify manipulation while preserving the inherent structure of the KG.",
"Early works on this topic learned such vectorial representations (i.e., embeddings) via just simple models developed over KG triples (Bordes et al., 2011 (Bordes et al., , 2013 Jenatton et al., 2012; Nickel et al., 2011) .",
"Recent attempts focused on either designing more complicated triple scoring models (Socher et al., 2013; Bordes et al., 2014; Wang et al., 2014; Lin et al., 2015b; Xiao et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , or incorporating extra information beyond KG triples (Chang et al., 2014; Zhong et al., 2015; Lin et al., 2015a; Neelakantan et al., 2015; Luo et al., 2015b; Xie et al., 2016a,b; Xiao et al., 2017) .",
"See (Wang et al., 2017) for a thorough review.",
"This paper, by contrast, investigates the potential of using very simple constraints to improve the KG embedding task.",
"Specifically, we examine two types of constraints: (i) non-negativity constraints on entity representations and (ii) approximate entailment constraints over relation representations.",
"By using the former, we learn compact representations for entities, which would naturally induce sparsity and interpretability (Murphy et al., 2012) .",
"By using the latter, we further encode regularities of logical entailment between relations into their distributed representations, which might be advantageous to downstream tasks like link prediction and relation extraction (Rocktäschel et al., 2015; Guo et al., 2016) .",
"These constraints impose prior beliefs upon the structure of the embedding space, and will help us to learn more predictive embeddings, without significantly increasing the space or time complexity.",
"Our work has some similarities to those which integrate logical background knowledge into KG embedding (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"Most of such works, however, need grounding of first-order logic rules.",
"The grounding process could be time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"(2016) tried to model rules using only relation representations.",
"But their work creates vector representations for entity pairs rather than individual entities, and hence fails to handle unpaired entities.",
"Moreover, it can only incorporate strict, hard rules which usually require extensive manual effort to create.",
"Minervini et al.",
"(2017b) proposed adversarial training which can integrate first-order logic rules without grounding.",
"But their work, again, focuses on strict, hard rules.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of rules.",
"But their work assigns to different rules a same confidence level, and considers only equivalence and inversion of relations, which might not always be available in a given KG.",
"Our approach differs from the aforementioned works in that: (i) it imposes constraints directly on entity and relation representations without grounding, and can easily scale up to large KGs; (ii) the constraints, i.e., non-negativity and approximate entailment derived automatically from statistical properties, are quite universal, requiring no manual effort and applicable to almost all KGs; (iii) it learns an individual representation for each entity, and can successfully make predictions between unpaired entities.",
"We evaluate our approach on publicly available KGs of WordNet, Freebase, and DBpedia as well.",
"Experimental results indicate that our approach is simple yet surprisingly effective, achieving significant and consistent improvements over competitive baselines, but without negative impacts on efficiency or scalability.",
"The non-negativity and approximate entailment constraints indeed improve model interpretability, resulting in a substantially increased structuring of the embedding space.",
"The remainder of this paper is organized as follows.",
"We first review related work in Section 2, and then detail our approach in Section 3.",
"Experiments and results are reported in Section 4, followed by concluding remarks in Section 5.",
"Related Work Recent years have seen growing interest in learning distributed representations for entities and relations in KGs, a.k.a.",
"KG embedding.",
"Early works on this topic devised very simple models to learn such distributed representations, solely on the basis of triples observed in a given KG, e.g., TransE which takes relations as translating operations between head and tail entities (Bordes et al., 2013) , and RESCAL which models triples through bilinear operations over entity and relation representations (Nickel et al., 2011) .",
"Later attempts roughly fell into two groups: (i) those which tried to design more complicated triple scoring models, e.g., the TransE extensions (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) , the RESCAL extensions (Yang et al., 2015; Nickel et al., 2016b; Trouillon et al., 2016; Liu et al., 2017) , and the (deep) neural network models (Socher et al., 2013; Bordes et al., 2014; Shi and Weninger, 2017; Schlichtkrull et al., 2017; Dettmers et al., 2018) ; (ii) those which tried to integrate extra information beyond triples, e.g., entity types Xie et al., 2016b) , relation paths (Neelakantan et al., 2015; Lin et al., 2015a) , and textual descriptions (Xie et al., 2016a; Xiao et al., 2017) .",
"Please refer to (Nickel et al., 2016a; Wang et al., 2017) for a thorough review of these techniques.",
"In this paper, we show the potential of using very simple constraints (i.e., nonnegativity constraints and approximate entailment constraints) to improve KG embedding, without significantly increasing the model complexity.",
"A line of research related to ours is KG embedding with logical background knowledge incorporated (Rocktäschel et al., 2015; Guo et al., 2016 Guo et al., , 2018 .",
"But most of such works require grounding of first-order logic rules, which is time and space inefficient especially for complicated rules.",
"To avoid grounding, Demeester et al.",
"Both works, however, can only handle strict, hard rules which usually require extensive effort to create.",
"Minervini et al.",
"(2017a) tried to handle uncertainty of background knowledge.",
"But their work considers only equivalence and inversion between relations, which might not always be available in a given KG.",
"Our approach, in contrast, imposes constraints directly on entity and relation representations without grounding.",
"And the constraints used are quite universal, requiring no manual effort and applicable to almost all KGs.",
"Non-negativity has long been a subject studied in various research fields.",
"Previous studies reveal that non-negativity could naturally induce sparsity and, in most cases, better interpretability (Lee and Seung, 1999) .",
"In many NLP-related tasks, nonnegativity constraints are introduced to learn more interpretable word representations, which capture the notion of semantic composition (Murphy et al., 2012; Luo et al., 2015a; Fyshe et al., 2015) .",
"In this paper, we investigate the ability of non-negativity constraints to learn more accurate KG embeddings with good interpretability.",
"Our Approach This section presents our approach.",
"We first introduce a basic embedding technique to model triples in a given KG ( § 3.1).",
"Then we discuss the nonnegativity constraints over entity representations ( § 3.2) and the approximate entailment constraints over relation representations ( § 3.3) .",
"And finally we present the overall model ( § 3.4).",
"A Basic Embedding Model We choose ComplEx (Trouillon et al., 2016) as our basic embedding model, since it is simple and efficient, achieving state-of-the-art predictive performance.",
"Specifically, suppose we are given a KG containing a set of triples O = {(e i , r k , e j )}, with each triple composed of two entities e i , e j ∈ E and their relation r k ∈ R. Here E is the set of entities and R the set of relations.",
"ComplEx then represents each entity e ∈ E as a complex-valued vector e ∈ C d , and each relation r ∈ R a complex-valued vector r ∈ C d , where d is the dimensionality of the embedding space.",
"Each x ∈ C d consists of a real vector component Re(x) and an imaginary vector component Im(x), i.e., x = Re(x) + iIm(x).",
"For any given triple (e i , r k , e j ) ∈ E × R × E, a multilinear dot product is used to score that triple, i.e., φ(e i , r k , e j ) Re( e i , r k ,ē j ) Re( [e i ] [r k ] [ē j ] ), (1) where e i , r k , e j ∈ C d are the vectorial representations associated with e i , r k , e j , respectively;ē j is the conjugate of e j ; [·] is the -th entry of a vector; and Re(·) means taking the real part of a complex value.",
"Triples with higher φ(·, ·, ·) scores are more likely to be true.",
"Owing to the asymmetry of this scoring function, i.e., φ(e i , r k , e j ) = φ(e j , r k , e i ), ComplEx can effectively handle asymmetric relations (Trouillon et al., 2016) .",
"Non-negativity of Entity Representations On top of the basic ComplEx model, we further require entities to have non-negative (and bounded) vectorial representations.",
"In fact, these distributed representations can be taken as feature vectors for entities, with latent semantics encoded in different dimensions.",
"In ComplEx, as well as most (if not all) previous approaches, there is no limitation on the range of such feature values, which means that both positive and negative properties of an entity can be encoded in its representation.",
"However, as pointed out by Murphy et al.",
"(2012) , it would be uneconomical to store all negative properties of an entity or a concept.",
"For instance, to describe cats (a concept), people usually use positive properties such as cats are mammals, cats eat fishes, and cats have four legs, but hardly ever negative properties like cats are not vehicles, cats do not have wheels, or cats are not used for communication.",
"Based on such intuition, this paper proposes to impose non-negativity constraints on entity representations, by using which only positive properties will be stored in these representations.",
"To better compare different entities on the same scale, we further require entity representations to stay within the hypercube of [0, 1] d , as approximately Boolean embeddings (Kruszewski et al., 2015) , i.e., 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (2) where e ∈ C d is the representation for entity e ∈ E, with its real and imaginary components denoted by Re(e), Im(e) ∈ R d ; 0 and 1 are d-dimensional vectors with all their entries being 0 or 1; and ≥, ≤ , = denote the entry-wise comparisons throughout the paper whenever applicable.",
"As shown by Lee and Seung (1999) , non-negativity, in most cases, will further induce sparsity and interpretability.",
"Approximate Entailment for Relations Besides the non-negativity constraints over entity representations, we also study approximate entailment constraints over relation representations.",
"By approximate entailment, we mean an ordered pair of relations that the former approximately entails the latter, e.g., BornInCountry and Nationality, stating that a person born in a country is very likely, but not necessarily, to have a nationality of that country.",
"Each such relation pair is associated with a weight to indicate the confidence level of entailment.",
"A larger weight stands for a higher level of confidence.",
"We denote by r p λ − → r q the approximate entailment between relations r p and r q , with confidence level λ.",
"This kind of entailment can be derived automatically from a KG by modern rule mining systems (Galárraga et al., 2015) .",
"Let T denote the set of all such approximate entailments derived beforehand.",
"Before diving into approximate entailment, we first explore the modeling of strict entailment, i.e., entailment with infinite confidence level λ = +∞.",
"The strict entailment r p → r q states that if relation r p holds then relation r q must also hold.",
"This entailment can be roughly modelled by requiring φ(e i , r p , e j ) ≤ φ(e i , r q , e j ), ∀e i , e j ∈ E, (3) where φ(·, ·, ·) is the score for a triple predicted by the embedding model, defined by Eq.",
"(1).",
"Eq.",
"(3) can be interpreted as follows: for any two entities e i and e j , if (e i , r p , e j ) is a true fact with a high score φ(e i , r p , e j ), then the triple (e i , r q , e j ) with an even higher score should also be predicted as a true fact by the embedding model.",
"Note that given the non-negativity constraints defined by Eq.",
"(2), a sufficient condition for Eq.",
"(3) to hold, is to further impose Re(r p ) ≤ Re(r q ), Im(r p ) = Im(r q ), (4) where r p and r q are the complex-valued representations for r p and r q respectively, with the real and imaginary components denoted by Re(·), Im(·) ∈ R d .",
"That means, when the constraints of Eq.",
"(4) (along with those of Eq.",
"(2)) are satisfied, the requirement of Eq.",
"(3) (or in other words r p → r q ) will always hold.",
"We provide a proof of sufficiency as supplementary material.",
"Next we examine the modeling of approximate entailment.",
"To this end, we further introduce the confidence level λ and allow slackness in Eq.",
"(4), which yields λ Re(r p ) − Re(r q ) ≤ α, (5) λ Im(r p ) − Im(r q ) 2 ≤ β.",
"(6) Here α, β ≥ 0 are slack variables, and (·) 2 means an entry-wise operation.",
"Entailments with higher confidence levels show less tolerance for violating the constraints.",
"When λ = +∞, Eqs.",
"(5) -(6) degenerate to Eq.",
"(4).",
"The above analysis indicates that our approach can model entailment simply by imposing constraints over relation representations, without traversing all possible (e i , e j ) entity pairs (i.e., grounding).",
"In addition, different confidence levels are encoded in the constraints, making our approach moderately tolerant of uncertainty.",
"The Overall Model Finally, we combine together the basic embedding model of ComplEx, the non-negativity constraints on entity representations, and the approximate entailment constraints over relation representations.",
"The overall model is presented as follows: min Θ,{α,β} D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T 1 (α + β) + η Θ 2 2 , s.t.",
"λ Re(r p ) − Re(r q ) ≤ α, λ Im(r p ) − Im(r q ) 2 ≤ β, α, β ≥ 0, ∀r p λ − → r q ∈ T , 0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E. (7) Here, Θ {e : e ∈ E} ∪ {r : r ∈ R} is the set of all entity and relation representations; D + and D − are the sets of positive and negative training triples respectively; a positive triple is directly observed in the KG, i.e., (e i , r k , e j ) ∈ O; a negative triple can be generated by randomly corrupting the head or the tail entity of a positive triple, i.e., (e i , r k , e j ) or (e i , r k , e j ); y ijk = ±1 is the label (positive or negative) of triple (e i , r k , e j ).",
"In this optimization, the first term of the objective function is a typical logistic loss, which enforces triples to have scores close to their labels.",
"The second term is the sum of slack variables in the approximate entailment constraints, with a penalty coefficient µ ≥ 0.",
"The motivation is, although we allow slackness in those constraints we hope the total slackness to be small, so that the constraints can be better satisfied.",
"The last term is L 2 regularization to avoid over-fitting, and η ≥ 0 is the regularization coefficient.",
"To solve this optimization problem, the approximate entailment constraints (as well as the corresponding slack variables) are converted into penalty terms and added to the objective function, while the non-negativity constraints remain as they are.",
"As such, the optimization problem of Eq.",
"(7) can be rewritten as: min Θ D + ∪D − log 1 + exp(−y ijk φ(e i , r k , e j )) + µ T λ1 Re(r p )−Re(r q ) + + µ T λ1 Im(r p )−Im(r q ) 2 + η Θ 2 2 , s.t.",
"0 ≤ Re(e), Im(e) ≤ 1, ∀e ∈ E, (8) where [x] + = max(0, x) with max(·, ·) being an entry-wise operation.",
"The equivalence between Eq.",
"(7) and Eq.",
"(8) is shown in the supplementary material.",
"We use SGD in mini-batch mode as our optimizer, with AdaGrad (Duchi et al., 2011) to tune the learning rate.",
"After each gradient descent step, we project (by truncation) real and imaginary components of entity representations into the hypercube of [0, 1] d , to satisfy the non-negativity constraints.",
"While favouring a better structuring of the embedding space, imposing the additional constraints will not substantially increase model complexity.",
"Our approach has a space complexity of O(nd + md), which is the same as that of ComplEx.",
"Here, n is the number of entities, m the number of relations, and O(nd + md) to store a d-dimensional complex-valued vector for each entity and each relation.",
"The time complexity (per iteration) of our approach is O(sd+td+nd), where s is the average number of triples in a mini-batch,n the average number of entities in a mini-batch, and t the total number of approximate entailments in T .",
"O(sd) is to handle triples in a mini-batch, O(td) penalty terms introduced by the approximate entailments, and O(nd) further the non-negativity constraints on entity representations.",
"Usually there are much fewer entailments than triples, i.e., t s, and alsō n ≤ 2s.",
"1 So the time complexity of our approach is on a par with O(sd), i.e., the time complexity of ComplEx.",
"Experiments and Results This section presents our experiments and results.",
"We first introduce the datasets used in our experiments ( § 4.1).",
"Then we empirically evaluate our approach in the link prediction task ( § 4.2).",
"After that, we conduct extensive analysis on both entity representations ( § 4.3) and relation representations ( § 4.4) to show the interpretability of our model.",
"1 There will be at most 2s entities contained in s triples.",
"Code and data used in the experiments are available at https://github.com/iieir-km/ ComplEx-NNE_AER.",
"Datasets The first two datasets we used are WN18 and F-B15K, released by Bordes et al.",
"(2013) .",
"2 WN18 is a subset of WordNet containing 18 relations and 40,943 entities, and FB15K a subset of Freebase containing 1,345 relations and 14,951 entities.",
"We create our third dataset from the mapping-based objects of core DBpedia.",
"3 We eliminate relations not included within the DBpedia ontology such as HomePage and Logo, and discard entities appearing less than 20 times.",
"The final dataset, referred to as DB100K, is composed of 470 relations and 99,604 entities.",
"Triples on each datasets are further divided into training, validation, and test sets, used for model training, hyperparameter tuning, and evaluation respectively.",
"We follow the original split for WN18 and FB15K, and draw a split of 597,572/ 50,000/50,000 triples for DB100K.",
"We further use AMIE+ (Galárraga et al., 2015) 4 to extract approximate entailments automatically from the training set of each dataset.",
"As suggested by Guo et al.",
"(2018) , we consider entailments with PCA confidence higher than 0.8.",
"5 As such, we extract 17 approximate entailments from WN18, 535 from FB15K, and 56 from DB100K.",
"Table 1 gives some examples of these approximate entailments, along with their confidence levels.",
"Table 2 further summarizes the statistics of the datasets.",
"Link Prediction We first evaluate our approach in the link prediction task, which aims to predict a triple (e i , r k , e j ) with e i or e j missing, i.e., predict e i given (r k , e j ) or predict e j given (e i , r k ).",
"Evaluation Protocol: We follow the protocol introduced by Bordes et al.",
"(2013) .",
"For each test triple (e i , r k , e j ), we replace its head entity e i with every entity e i ∈ E, and calculate a score for the corrupted triple (e i , r k , e j ), e.g., φ(e i , r k , e j ) defined by Eq.",
"(1).",
"Then we sort these scores in de- scending order, and get the rank of the correct entity e i .",
"During ranking, we remove corrupted triples that already exist in either the training, validation, or test set, i.e., the filtered setting as described in (Bordes et al., 2013) .",
"This whole procedure is repeated while replacing the tail entity e j .",
"We report on the test set the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (HITS@N), with n = 1, 3, 10.",
"Comparison Settings: We compare the performance of our approach against a variety of KG embedding models developed in recent years.",
"These models can be categorized into three groups: • Simple embedding models that utilize triples alone without integrating extra information, including TransE (Bordes et al., 2013) , Dist-Mult (Yang et al., 2015) , HolE (Nickel et al., 2016b) , ComplEx (Trouillon et al., 2016) , and ANALOGY (Liu et al., 2017) .",
"Our approach is developed on the basis of ComplEx.",
"• Other extensions of ComplEx that integrate logical background knowledge in addition to triples, including RUGE (Guo et al., 2018) and ComplEx R (Minervini et al., 2017a) .",
"The former requires grounding of first-order logic rules.",
"The latter is restricted to relation equiv-alence and inversion, and assigns an identical confidence level to all different rules.",
"• Latest developments or implementations that achieve current state-of-the-art performance reported on the benchmarks of WN18 and F-B15K, including R-GCN (Schlichtkrull et al., 2017) , ConvE (Dettmers et al., 2018) , and Single DistMult (Kadlec et al., 2017) .",
"6 The first two are built based on neural network architectures, which are, by nature, more complicated than the simple models.",
"The last one is a re-implementation of DistMult, generating 1000 to 2000 negative training examples per positive one, which leads to better performance but requires significantly longer training time.",
"We further evaluate our approach in two different settings: (i) ComplEx-NNE that imposes only the Non-Negativity constraints on Entity representations, i.e., optimization Eq.",
"(8) with µ = 0; and (ii) ComplEx-NNE+AER that further imposes the Approximate Entailment constraints over Relation representations besides those non-negativity ones, i.e., optimization Eq.",
"(8) with µ > 0.",
"Implementation Details: We compare our approach against all the three groups of baselines on the benchmarks of WN18 and FB15K.",
"We directly report their original results on these two datasets to avoid re-implementation bias.",
"On DB100K, the newly created dataset, we take the first two groups of baselines, i.e., those simple embedding models and ComplEx extensions with logical background knowledge incorporated.",
"We do not use the third group of baselines due to efficiency and complexity issues.",
"We use the code provided by Trouillon et al.",
"(2016) 7 for TransE, DistMult, and ComplEx, and the code released by their authors for ANAL-OGY 8 and RUGE 9 .",
"We re-implement HolE and ComplEx R so that all the baselines (as well as our approach) share the same optimization mode, i.e., SGD with AdaGrad and gradient normalization, to facilitate a fair comparison.",
"10 We follow Trouillon et al.",
"(2016) to adopt a ranking loss for TransE and a logistic loss for all the other methods.",
"(Trouillon et al., 2016) .",
"Results for the other baselines are taken from the original papers.",
"Missing scores not reported in the literature are indicated by \"-\".",
"Best scores are highlighted in bold, and \" * \" indicates statistically significant improvements over ComplEx.",
"Table 4 : Link prediction results on the test set of DB100K, with best scores highlighted in bold, statistically significant improvements marked by \" * \".",
"Among those baselines, RUGE and ComplEx R require additional logical background knowledge.",
"RUGE makes use of soft rules, which are extracted by AMIE+ from the training sets.",
"As suggested by Guo et al.",
"(2018) , length-1 and length-2 rules with PCA confidence higher than 0.8 are utilized.",
"Note that our approach also makes use of AMIE+ rules with PCA confidence higher than 0.8.",
"But it only considers entailments between a pair of relations, i.e., length-1 rules.",
"ComplEx R takes into account equivalence and inversion between relations.",
"We derive such axioms directly from our approximate entailments.",
"If r p λ 1 − → r q and r q λ 2 − → r p with λ 1 , λ 2 > 0.8, we think relations r p and r q are equivalent.",
"And similarly, if r −1 p λ 1 − → r q and r −1 q λ 2 − → r p with λ 1 , λ 2 > 0.8, we consider r p as an inverse of r q .",
"For all the methods, we create 100 mini-batches on each dataset, and conduct a grid search to find hyperparameters that maximize MRR on the validation set, with at most 1000 iterations over the training set.",
"Specifically, we tune the embedding size d ∈ {100, 150, 200}, the L 2 regularization coefficient η ∈ {0.001, 0.003, 0.01, 0.03, 0.1}, the ratio of negative over positive training examples α ∈ {2, 10}, and the initial learning rate γ ∈ {0.01, 0.05, 0.1, 0.5, 1.0}.",
"For TransE, we tune the margin of the ranking loss δ ∈ {0.1, 0.2, 0.5, 1, 2, 5, 10}.",
"Other hyperparameters of ANALOGY and RUGE are set or tuned according to the default settings suggested by their authors (Liu et al., 2017; Guo et al., 2018) .",
"After getting the best ComplEx model, we tune the relation constraint penalty of our approach ComplEx-NNE+AER (µ in Eq.",
"(8)) in the range of {10 −5 , 10 −4 , · · · , 10 4 , 10 5 }, with all its other hyperparameters fixed to their optimal configurations.",
"We then directly set µ = 0 to get the optimal ComplEx-NNE model.",
"The weight of soft constraints in ComplEx R is tuned in the same range as µ.",
"The optimal configurations for our approach are: d = 200, η = 0.03, α = 10, γ = 1.0, µ = 10 on WN18; d = 200, η = 0.01, α = 10, γ = 0.5, µ = 10 −3 on FB15K; and d = 150, η = 0.03, α = 10, γ = 0.1, µ = 10 −5 on DB100K.",
"Experimental Results: Table 3 presents the results on the test sets of WN18 and FB15K, where the results for the baselines are taken directly from previous literature.",
"Table 4 further provides the results on the test set of DB100K, with all the methods tuned and tested in (almost) the same setting.",
"On all the datasets, we test statistical significance of the improvements achieved by ComplEx-NNE/ ComplEx-NNE+AER over ComplEx, by using a paired t-test.",
"The reciprocal rank or HITS@N value with n = 1, 3, 10 for each test triple is used as paired data.",
"The symbol \" * \" indicates a significance level of p < 0.05.",
"The results demonstrate that imposing the nonnegativity and approximate entailment constraints indeed improves KG embedding.",
"ComplEx-NNE and ComplEx-NNE+AER perform better than (or at least equally well as) ComplEx in almost all the metrics on all the three datasets, and most of the improvements are statistically significant (except those on WN18).",
"More interestingly, just by introducing these simple constraints, ComplEx-NNE+ AER can beat very strong baselines, including the best performing basic models like ANALOGY, those previous extensions of ComplEx like RUGE or ComplEx R , and even the complicated developments or implementations like ConvE or Single DistMult.",
"This demonstrates the superiority of our approach.",
"Analysis on Entity Representations This section inspects how the structure of the entity embedding space changes when the constraints are imposed.",
"We first provide the visualization of entity representations on DB100K.",
"On this dataset each entity is associated with a single type label.",
"11 We pick 4 types reptile, wine region, species, and programming language, and randomly select 30 entities from each type.",
"Figure 1 visualizes with the same type tend to activate the same set of dimensions, while entities with different types often get clearly different dimensions activated.",
"Then we investigate the semantic purity of these dimensions.",
"Specifically, we collect the representations of all the entities on DB100K (real components only).",
"For each dimension of these representations, top K percent of entities with the highest activation values on this dimension are picked.",
"We can calculate the entropy of the type distribution of the entities selected.",
"This entropy reflects diversity of entity types, or in other words, semantic purity.",
"If all the K percent of entities have the same type, we will get the lowest entropy of zero (the highest semantic purity).",
"On the contrary, if each of them has a distinct type, we will get the highest entropy (the lowest semantic purity).",
"Figure 2 AER, as K varies.",
"We can see that after imposing the non-negativity constraints, ComplEx-NNE and ComplEx-NNE+AER can learn entity representations with latent dimensions of consistently higher semantic purity.",
"We have conducted the same analyses on imaginary components of entity representations, and observed similar phenomena.",
"The results are given as supplementary material.",
"Analysis on Relation Representations This section further provides a visual inspection of the relation embedding space when the constraints are imposed.",
"To this end, we group relation pairs involved in the DB100K entailment constraints into 3 classes: equivalence, inversion, and others.",
"12 We choose 2 pairs of relations from each class, and visualize these relation representations learned by ComplEx-NNE+AER in Figure 3 , where for each relation we randomly pick 5 dimensions from both its real and imaginary components.",
"By imposing the approximate entailment constraints, these relation representations can encode logical regularities quite well.",
"Pairs of relations from the first class (equivalence) tend to have identical representations r p ≈ r q , those from the second class (inversion) complex conjugate representations r p ≈r q ; and the others representations that Re(r p ) ≤ Re(r q ) and Im(r p ) ≈ Im(r q ).",
"12 Equivalence and inversion are detected using heuristics introduced in § 4.2 (implementation details).",
"See the supplementary material for detailed properties of these three classes.",
"Conclusion This paper investigates the potential of using very simple constraints to improve KG embedding.",
"Two types of constraints have been studied: (i) the non-negativity constraints to learn compact, interpretable entity representations, and (ii) the approximate entailment constraints to further encode logical regularities into relation representations.",
"Such constraints impose prior beliefs upon the structure of the embedding space, and will not significantly increase the space or time complexity.",
"Experimental results on benchmark KGs demonstrate that our method is simple yet surprisingly effective, showing significant and consistent improvements over strong baselines.",
"The constraints indeed improve model interpretability, yielding a substantially increased structuring of the embedding space.",
"Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek.",
"2014"
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Our Approach",
"A Basic Embedding Model",
"Non-negativity of Entity Representations",
"Approximate Entailment for Relations",
"The Overall Model",
"Experiments and Results",
"Datasets",
"Link Prediction",
"Analysis on Entity Representations",
"Analysis on Relation Representations",
"Conclusion"
]
} | GEM-SciDuet-train-121#paper-1331#slide-17 | Analysis on relation representations | Visualization of relation representations
Encode logical regularities quite well | Visualization of relation representations
Encode logical regularities quite well | [] |
GEM-SciDuet-train-122#paper-1332#slide-0 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-0 | Adpositions are Pervasive | Adpositions: prepositions or postpositions
Order of Adposition and Noun Phrase WALS / Dryer and Haspelmath | Adpositions: prepositions or postpositions
Order of Adposition and Noun Phrase WALS / Dryer and Haspelmath | [] |
GEM-SciDuet-train-122#paper-1332#slide-1 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-1 | Prepositions are some of the most frequent | Based on the COCA list of 5000 most frequent words | Based on the COCA list of 5000 most frequent words | [] |
GEM-SciDuet-train-122#paper-1332#slide-2 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-2 | We know Prepositions are challenging for Syntactic Parsing | a talk at the conference on prepositions
But what about the meaning beyond linking governor and object? | a talk at the conference on prepositions
But what about the meaning beyond linking governor and object? | [] |
GEM-SciDuet-train-122#paper-1332#slide-3 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-3 | Prepositions are highly Polysemous | in love, in trouble
in fact leave for Paris ate for hours a gift for mother raise money for the party | in love, in trouble
in fact leave for Paris ate for hours a gift for mother raise money for the party | [] |
GEM-SciDuet-train-122#paper-1332#slide-4 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-4 | Translations are Many to Many | raise money for the church a gift for mother
give the gift to mother
go to Paris raise money to buy a house | raise money for the church a gift for mother
give the gift to mother
go to Paris raise money to buy a house | [] |
GEM-SciDuet-train-122#paper-1332#slide-5 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-5 | Potential Applications | MT into English: mistranslation of prepositions among most common errors
Semantic Parsing / SRL | MT into English: mistranslation of prepositions among most common errors
Semantic Parsing / SRL | [] |
GEM-SciDuet-train-122#paper-1332#slide-6 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-6 | Goal Disambiguation | Descriptive theory (annotation scheme) | Descriptive theory (annotation scheme) | [] |
GEM-SciDuet-train-122#paper-1332#slide-7 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-7 | Our Approach | Comprehensive with respect to naturally occurring text
Unified scheme for prepositions and possessives
Scene role and prepositions lexical contribution are distinguished
In this paper: English | Comprehensive with respect to naturally occurring text
Unified scheme for prepositions and possessives
Scene role and prepositions lexical contribution are distinguished
In this paper: English | [] |
GEM-SciDuet-train-122#paper-1332#slide-9 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-9 | Challenges for Comprehensiveness | What counts as a preposition/possessive marker?
Prepositional multi-word expressions (of course)
Phrasal verbs (give up)
Rare senses (RateUnit, 40 miles per Gallon)
Rare prepositions (in keeping with) | What counts as a preposition/possessive marker?
Prepositional multi-word expressions (of course)
Phrasal verbs (give up)
Rare senses (RateUnit, 40 miles per Gallon)
Rare prepositions (in keeping with) | [] |
GEM-SciDuet-train-122#paper-1332#slide-10 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-10 | Supersense Inventory | Semantic Network of Adposition and Case Supersenses (SNACS)
50 supersenses, 4 levels of depth
Simpler than its predecessor (Schneider et al., 2016)
Fewer categories, smaller hierarchy
Usually core semantic roles
Usually non-core semantic roles | Semantic Network of Adposition and Case Supersenses (SNACS)
50 supersenses, 4 levels of depth
Simpler than its predecessor (Schneider et al., 2016)
Fewer categories, smaller hierarchy
Usually core semantic roles
Usually non-core semantic roles | [] |
GEM-SciDuet-train-122#paper-1332#slide-11 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-11 | Construal | Challenge: the preposition itself and the verb may suggest different labels
Similar meanings: the same label?
1. Vernon works at Grunnings
at Grunnings: Locus or OrgRole
2. Vernon works for Grunnings for Grunning: Beneficiary or
Approach: distinguish scene role and preposition function
Scene role and preposition function may diverge:
Function = Scene Role in 1/3 of instances | Challenge: the preposition itself and the verb may suggest different labels
Similar meanings: the same label?
1. Vernon works at Grunnings
at Grunnings: Locus or OrgRole
2. Vernon works for Grunnings for Grunning: Beneficiary or
Approach: distinguish scene role and preposition function
Scene role and preposition function may diverge:
Function = Scene Role in 1/3 of instances | [] |
GEM-SciDuet-train-122#paper-1332#slide-12 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-12 | Documentation | A web-app and repository of prepositions/supersenses
Standardized format and querying tools to retrieve relevant examples/guidelines | A web-app and repository of prepositions/supersenses
Standardized format and querying tools to retrieve relevant examples/guidelines | [] |
GEM-SciDuet-train-122#paper-1332#slide-13 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-13 | Re annotated Dataset | STREUSLE is a corpus annotated with (preposition) supersenses
Text: review section of the English Web Treebank
Complete revision of STREUSLE: version 4.0
5,455 target prepositions, including 1,104 possessives | STREUSLE is a corpus annotated with (preposition) supersenses
Text: review section of the English Web Treebank
Complete revision of STREUSLE: version 4.0
5,455 target prepositions, including 1,104 possessives | [] |
GEM-SciDuet-train-122#paper-1332#slide-14 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-14 | Preposition Distribution | 10 account for 2/3 of the mass
regardless of abou in time in the process of it fot
under circumstances according to a least
out of date on the cheap ahead of time across
over the years in time of need just about below all over between home without than our to | 10 account for 2/3 of the mass
regardless of abou in time in the process of it fot
under circumstances according to a least
out of date on the cheap ahead of time across
over the years in time of need just about below all over between home without than our to | [] |
GEM-SciDuet-train-122#paper-1332#slide-15 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-15 | Supersense Distribution | Path Cost Extent Co-Agent Experiencer Stimulus
Topic Time Gestalt Locus | Path Cost Extent Co-Agent Experiencer Stimulus
Topic Time Gestalt Locus | [] |
GEM-SciDuet-train-122#paper-1332#slide-16 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-16 | Inter Annotator Agreement | Annotated a small sample of The Little Prince
5 annotators, varied familiarity with scheme
Exact agreement (pairwise avg.): | Annotated a small sample of The Little Prince
5 annotators, varied familiarity with scheme
Exact agreement (pairwise avg.): | [] |
GEM-SciDuet-train-122#paper-1332#slide-17 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-17 | Disambiguation Models | Most Frequent (MF) baseline: most frequent label for the preposition in training
Neural: BiLSTM over sentence + multilayer perceptron per preposition
Feature-rich linear: SVM per preposition, with features based on previous work (Srikumar &
Lexicon-based features: WordNet, Roget thesaurus | Most Frequent (MF) baseline: most frequent label for the preposition in training
Neural: BiLSTM over sentence + multilayer perceptron per preposition
Feature-rich linear: SVM per preposition, with features based on previous work (Srikumar &
Lexicon-based features: WordNet, Roget thesaurus | [] |
GEM-SciDuet-train-122#paper-1332#slide-18 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-18 | Target Identification | Multi-word prepositions, especially rare ones (e.g., after the fashion of)
Idiomatic PPs (e.g., in action, by far) | Multi-word prepositions, especially rare ones (e.g., after the fashion of)
Idiomatic PPs (e.g., in action, by far) | [] |
GEM-SciDuet-train-122#paper-1332#slide-19 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-19 | Disambiguation Results | With gold standard syntax target identification:
Role Acc Fxn Acc Full Acc
Most Frequent Neural Feature-rich linear | With gold standard syntax target identification:
Role Acc Fxn Acc Full Acc
Most Frequent Neural Feature-rich linear | [] |
GEM-SciDuet-train-122#paper-1332#slide-20 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-20 | Results Summary | Predicting function label is more difficult than role label
~8% gap in F1 score in both settings
This mirrors a similar effect in IAA, and is probably due to:
Less ambiguity in function labels (given a preposition)
The more literal nature of function labels
Syntax plays an important role
4-7% difference in performance
Neural and feature-rich approach are not far off in terms of performance
Feature-rich is marginally better
They agree on about 2/3 of cases; agreement area is 5% more accurate | Predicting function label is more difficult than role label
~8% gap in F1 score in both settings
This mirrors a similar effect in IAA, and is probably due to:
Less ambiguity in function labels (given a preposition)
The more literal nature of function labels
Syntax plays an important role
4-7% difference in performance
Neural and feature-rich approach are not far off in terms of performance
Feature-rich is marginally better
They agree on about 2/3 of cases; agreement area is 5% more accurate | [] |
GEM-SciDuet-train-122#paper-1332#slide-21 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
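The target identification step described in §6.2 first matches known multiword expressions against blacklist/whitelist lookups and then filters single-word candidates by part of speech. A minimal sketch of that two-stage procedure follows; the list contents, the UD-style POS tags, and the function names are illustrative stand-ins, not the released implementation (which additionally handles for-subjects, too/enough comparatives, and the other filters mentioned above).

```python
# Minimal sketch of the Section 6.2 target identification heuristics.
# MWE lists and POS tags are illustrative; the real lists are learned from the training data.

MWE_WHITELIST = {("out", "of"), ("in", "front", "of"), ("in", "town")}   # prepositional MWEs
MWE_BLACKLIST = {("take", "care", "of")}                                 # non-prepositional MWEs containing prepositions
CANDIDATE_POS = {"ADP", "PART", "ADV", "SCONJ"}                          # high-recall POS set
POSS_PRONOUNS = {"my", "your", "his", "her", "its", "our", "their"}

def identify_targets(tokens, pos_tags):
    """Return (start, end) spans that should receive a SNACS supersense."""
    targets, used = [], set()
    lemmas = [t.lower() for t in tokens]
    # 1) longest-match scan for known multiword expressions
    for i in range(len(lemmas)):
        for j in range(min(len(lemmas), i + 4), i, -1):
            if any(k in used for k in range(i, j)):
                continue
            span = tuple(lemmas[i:j])
            if span in MWE_BLACKLIST:
                used.update(range(i, j))      # block these tokens entirely
                break
            if span in MWE_WHITELIST and j - i > 1:
                targets.append((i, j))
                used.update(range(i, j))
                break
    # 2) single-word candidates from the remaining tokens
    for i, (lemma, pos) in enumerate(zip(lemmas, pos_tags)):
        if i in used:
            continue
        if pos in CANDIDATE_POS or lemma == "'s" or (pos == "PRON" and lemma in POSS_PRONOUNS):
            targets.append((i, i + 1))
    return sorted(targets)

print(identify_targets(["She", "lives", "in", "front", "of", "the", "park"],
                       ["PRON", "VERB", "ADP", "NOUN", "ADP", "DET", "NOUN"]))
```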
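The most frequent baseline mentioned in the results simply memorizes, per (gold) lemma, the most frequent role;function pair in the training data. A small sketch under that description, with a global fallback for unseen lemmas (an assumption on my part, since the text does not say how unseen lemmas are handled):

```python
# Sketch of the most-frequent baseline: predict the most frequent role-function
# pair for each lemma as observed in the training data.
from collections import Counter, defaultdict

def train_most_frequent(train):                      # train: list of (lemma, role, function)
    counts = defaultdict(Counter)
    for lemma, role, func in train:
        counts[lemma][(role, func)] += 1
    global_best = Counter(p for c in counts.values() for p in c.elements()).most_common(1)[0][0]
    return {lemma: c.most_common(1)[0][0] for lemma, c in counts.items()}, global_best

def predict(model, lemma):
    table, fallback = model
    return table.get(lemma, fallback)                # back off for unseen lemmas

model = train_most_frequent([
    ("in", "Locus", "Locus"), ("in", "Locus", "Locus"), ("in", "Time", "Time"),
    ("for", "Beneficiary", "Beneficiary"), ("for", "Duration", "Duration"),
])
print(predict(model, "in"), predict(model, "of"))
```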
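The depth-k analyses in tables 3 and 6 coarsen each predicted and gold supersense to its ancestor at a given depth of the hierarchy before scoring. A sketch of that post hoc coarsening is below; the PARENT map is only an illustrative fragment, not the exact figure 2 hierarchy.

```python
# Illustrative sketch of depth-k coarsening: replace every label by its ancestor
# at the requested depth, then compute accuracy as usual.

PARENT = {  # stand-in fragment of the SNACS hierarchy
    "Circumstance": None, "Locus": "Circumstance", "Goal": "Locus", "Source": "Locus",
    "Configuration": None, "Gestalt": "Configuration", "Possessor": "Gestalt",
    "SocialRole": "Gestalt", "OrgRole": "SocialRole",
}

def path_to_root(label):
    chain = [label]
    while PARENT.get(chain[-1]) is not None:
        chain.append(PARENT[chain[-1]])
    return list(reversed(chain))                 # root first, e.g. [Circumstance, Locus, Goal]

def coarsen(label, depth):
    chain = path_to_root(label)
    return chain[min(depth, len(chain)) - 1]

def accuracy_at_depth(gold, pred, depth):
    pairs = [(coarsen(g, depth), coarsen(p, depth)) for g, p in zip(gold, pred)]
    return sum(g == p for g, p in pairs) / len(pairs)

gold = ["Locus", "Goal", "Possessor"]
pred = ["Goal", "Goal", "OrgRole"]
for d in (1, 2, 3):
    print(d, accuracy_at_depth(gold, pred, d))
```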
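The "pairwise averages" agreement figures reported for the Little Prince study are raw observed agreement computed for every pair of annotators and then averaged over all pairs, without chance correction. A compact sketch of that computation:

```python
# Sketch of pairwise-average interannotator agreement (no chance correction),
# assuming every annotator labeled the same tokens in the same order.
from itertools import combinations

def pairwise_average_agreement(annotations):
    """annotations: dict annotator -> list of labels."""
    scores = []
    for a, b in combinations(sorted(annotations), 2):
        la, lb = annotations[a], annotations[b]
        scores.append(sum(x == y for x, y in zip(la, lb)) / len(la))
    return sum(scores) / len(scores)

toy = {
    "A": ["Locus", "Goal", "Possessor", "Topic"],
    "B": ["Locus", "Goal", "Gestalt",   "Topic"],
    "C": ["Locus", "Path", "Possessor", "Topic"],
}
print(round(pairwise_average_agreement(toy), 3))   # about 0.667 on this toy example
```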
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-21 | Multi Lingual Perspective | Work is underway in Chinese, Korean, Hebrew and German
Parallel Text: The Little Prince
Complex interaction with morphology (e.g., via case)
How do prepositions change in translation?
How do role/function labels change in translation? | Work is underway in Chinese, Korean, Hebrew and German
Parallel Text: The Little Prince
Complex interaction with morphology (e.g., via case)
How do prepositions change in translation?
How do role/function labels change in translation? | [] |
GEM-SciDuet-train-122#paper-1332#slide-22 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
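The construal analysis in §3.3 notates annotations as ROLE;FUNCTION, with a single supersense when the two coincide. A small sketch of one way such labels could be represented and parsed; the class and method names are hypothetical, not part of the released STREUSLE tooling.

```python
# Sketch of a ROLE;FUNCTION construal representation: "Locus" means role and function
# coincide, while "OrgRole;Locus" keeps scene role and lexical function separate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Construal:
    role: str
    function: str

    @classmethod
    def parse(cls, label):
        role, _, function = label.partition(";")
        return cls(role, function or role)     # "Locus" -> role == function == "Locus"

    def __str__(self):
        return self.role if self.role == self.function else f"{self.role};{self.function}"

print(Construal.parse("OrgRole;Locus"))        # OrgRole;Locus
print(Construal.parse("Locus"))                # Locus
```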
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-22 | Conclusion | A new approach to comprehensive analysis of the semantics of prepositions and possessives in English
Simpler and more concise than previous version
Encouraging initial disambiguation results | A new approach to comprehensive analysis of the semantics of prepositions and possessives in English
Simpler and more concise than previous version
Encouraging initial disambiguation results | [] |
GEM-SciDuet-train-122#paper-1332#slide-23 | 1332 | Comprehensive Supersense Disambiguation of English Prepositions and Possessives | Semantic relations are often signaled with prepositional or possessive marking-but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker's lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232
],
"paper_content_text": [
"Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances.",
"Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions.",
"Though function words bear little semantic content, they are nevertheless crucial to the meaning.",
"Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children * nathan.schneider@georgetown.edu (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 .",
"(2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews .",
"(3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old .",
"vs. Grandma cooked the children for dinner).",
"Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition-I rode the bus for 5 dollars/minutes-and the governor of the prepositional phrase (PP): I Ubered/asked for $5.",
"Possessives are similarly ambiguous: Whistler's mother/painting/hat/death.",
"Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge ( §2).",
"This work represents a new attempt to strike that balance.",
"Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage.",
"Given the semantic overlap between prepositions and possessives (the hood of the car vs. the car's hood or its hood), we analyze them using the same inventory of semantic labels.",
"1 Our contributions include: • a new hierarchical inventory (\"SNACS\") of 50 supersense classes, extensively documented in guidelines for English ( §3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated ( §4; example sentences appear in figure 1); • an interannotator agreement study that shows the scheme is reliable and generalizes across genres-and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP's semantic role ( §5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task ( §6).",
"Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010) .",
"Possessive constructions can likewise denote a number of semantic relations, and various factors-including semantics-influence whether attributive possession in English will be expressed with of, or with 's and possessive pronouns (the 'genitive alternation '; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015) .",
"Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives 2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010 Hovy et al., , 2011 Tratz and Hovy, 2013) , and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O'Hara and Wiebe, 2009; Roth, 2011, 2013; Schneider et al., 2015 Schneider et al., , 2016 , see also Müller et al., 2012 for German).",
"The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions.",
"The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general-2 Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003 (Pustejovsky et al., , 2012 .",
"ize more easily to new types and usages.",
"The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015 (Schneider et al., , 2016 .",
"It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017) .",
"The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE 3 (Schneider et al., 2016) .",
"However, several limitations of our approach became clear to us over time.",
"First, as pointed out by , the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself.",
"suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically.",
"We address that gap here.",
"Second, 75 categories is an unwieldy number for both annotators and disambiguation systems.",
"Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning.",
"In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016) .",
"remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems.",
"We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions.",
"Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives.",
"Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0.",
"We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens.",
"3 Annotation Scheme Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap-ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions.",
"The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of 'preposition' that would encompass these other categories.",
"As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance.",
"Another consideration is developing annotation guidelines that can be adapted for other languages.",
"This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.",
"4 English possessive marking (via 's or possessive pronouns like my) is more generally an example of case marking.",
"Note that prepositions (4a-4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador's wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations.",
"This motivates a common semantic inventory for adpositions and case.",
"We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air 5 ), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large).",
"Our annotation guidelines give further details.",
"The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2 .",
"It is simpler than its predecessor- Schneider et al.",
"'s (2016) preposition supersense hierarchy-in both size and structural complexity.",
"To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air.",
"SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels.",
"The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively.",
"The PARTICIPANT and CIRCUM-STANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet's thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011) .",
"Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant.",
"CONFIGU-RATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole.",
"Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions.",
"The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION-THEME is located at LO-CUS; MOTION-THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION-AGENT acts on THEME, perhaps using an IN-STRUMENT; POSSESSION-POSSESSION belongs to POSSESSOR; TRANSFER-THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION-EXPERIENCER is mentally affected by STIMULUS; COGNITION-EXPERIENCER contemplates TOPIC; COMMUNI-CATION-information (TOPIC) flows from ORIG-INATOR to RECIPIENT, perhaps via an INSTRU-MENT.",
"For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POS-SESSOR, and SOCIALREL, the object of the preposition is prototypically animate.",
"Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases.",
"We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018) .",
"Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words.",
"SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos's (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning.",
"Our categories can thus be understood as refinements to REL.",
"Adopting the Construal Analysis Hwang et al.",
"(2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label.",
"One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels.",
"For instance: (5) a. Vernon works at Grunnings.",
"b. Vernon works for Grunnings.",
"The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer.",
"SNACS has the label ORGROLE for this purpose.",
"6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this hypothesis, Where does Vernon work?",
"is a perfectly good way to ask a question that could be answered by the PP.",
"In this example, then, there is overlap between locational meaning and organizationalbelonging meaning.",
"(5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer.",
"Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant.",
"Schneider et al.",
"(2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses.",
"Instead, suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role.",
"The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor 7 of the prepositional phrase dictates its relationship to the prepositional phrase.",
"The innovative claim is that, in addition to a preposition's relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b).",
"Construal is notated by ROLE;FUNCTION.",
"Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions.",
"Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/.",
".",
".",
"the door.",
"GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene.",
"This approach also allows for resolution of various se- mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996) , where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH).",
"Both role and function slots are filled by supersenses from the SNACS hierarchy.",
"Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role).",
"When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England.",
"We apply the construal analysis in SNACS annotation of our corpus to test its feasibility.",
"It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci's/ORIGINATOR;POSSESSOR sculptures.",
"Annotated Reviews Corpus We applied the SNACS annotation scheme ( §3) to prepositions and possessives in the STREUSLE corpus ( §2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012) .",
"The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016) , and we bundle the UD version 2 syntax alongside our annotations.",
"Table 1 shows the total number of tokens present and those that we annotated.",
"Altogether, 5,455 tokens were annotated for scene role and function.",
"The new hierarchy and annotation guidelines were developed by consensus.",
"The original preposition supersense annotations were placed in a spreadsheet and discussed.",
"While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus.",
"For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels.",
"Unusual or rare contexts also presented difficulties.",
"Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines.",
"Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.",
"8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g.",
"after_all) and coordinating conjunctions (as_well_as).",
"9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens).",
"Table 2 shows the most and least common labels occurring as scene role and function.",
"Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTI-CIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies.",
"While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERI-ENCER.",
"It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim-ited to either role or function.",
"This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.",
"10 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre.",
"We chose Saint-Exupéry's novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013) .",
"The genre is markedly different from online reviews-it is quite literary, and employs archaic or poetic figures of speech.",
"It is also a translation from French, contributing to the markedness of the language.",
"This text is therefore a challenge for an annotation scheme based on colloquial contemporary English.",
"We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre.",
"For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically ( §6.2).",
"The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times.",
"Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit.",
"For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators.",
"Annotators.",
"Five annotators (A, B, C, D, E), all authors of this paper, took part in this study.",
"All are computational linguistics researchers with advanced training in linguistics.",
"Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10 All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function.",
"Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT.",
"Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare.",
"EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal.",
"Labels Role Function Table 3 : Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (\"Exact\" means no coarsening).",
"\"Labels\" refers to the number of distinct labels that annotators could have provided at that level of coarsening.",
"Excludes tokens where at least one annotator assigned a nonsemantic label.",
"involved in developing the guidelines and learning the scheme solely from reading the manual.",
"Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers.",
"Results.",
"In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators.",
"APPROXIMATOR, CO-THEME, COST, INSTEADOF, INTERVAL, RATEU-NIT, and SPECIES were not used by any annotator.",
"To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators.",
"Despite varying exposure to the scheme, there is no obvious relationship between annotators' backgrounds and their agreement rates.",
"11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.",
"Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).",
"12 All annotators agree on the role for 119, and on the function for 139 tokens.",
"Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.",
"This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic ( §3.3) .",
"The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 2-4 in table 3; see also confusion matrix in supplement).",
"Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories.",
"Results show that most confusions are local with respect to the hierarchy.",
"Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps.",
"Target identification heuristics ( §6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense.",
"A supervised classifier then predicts a supersense analysis for each identified target.",
"The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing.",
"Experimental Setup Our experiments use the reviews corpus described in §4.",
"We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1.",
"All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters.",
"Gold tokenization was used throughout.",
"Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation-i.e., tokens with special labels (see §4) were excluded.",
"To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014) .",
"13 Named entity tags from the default 12-class CoreNLP model were used in all conditions.",
"6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13 The CoreNLP parser was trained on all 5 genres of the English Web Treebank-i.e., a superset of our training set.",
"Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP.",
"well as possessive case markers and multiword expressions.",
"61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes.",
"Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE).",
"It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation.",
"To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets.",
"The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of ) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town).",
"Both lists were constructed from the training data.",
"From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals.",
"Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash-to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride).",
"Classification The next step of disambiguation is predicting the role and function labels.",
"We explore two different modeling strategies.",
"Feature-rich Model.",
"Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013) , which were themselves extended from the preposition sense disambiguation features of Hovy et al.",
"(2010) .",
"We briefly describe the feature set here, and refer the reader to the original work for further details.",
"At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun).",
"In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture target-specific interactions with the labels.",
"The features extracted from each neighboring word are listed in the supplementary material.",
"Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008) .",
"Neural Model.",
"Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiL-STM.",
"For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.",
"14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations.",
"For each identified target unit u, we extract its first token, and its governor and object headword.",
"For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label.",
"We additionally concatenate embeddings of u's lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized.",
"All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010) .",
"A NONE label is used when the corresponding feature is not given, both in training and at test time.",
"The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels.",
"We tuned hyperparameters on the development set to maximize F-score (see supplementary material).",
"We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20.",
"Inverted dropout was used during training.",
"The model is implemented with the DyNet library (Neubig et al., 2017) .",
"The model architecture is largely comparable to that of Gonen and Goldberg (2016) , who experimented with a coarsened version of STREUSLE 3.0.",
"The main difference is their use of unlabeled multilingual datasets to improve pre-14 Word2vec is pre-trained on the Google News corpus.",
"Zero vectors are used where vectors are not available.",
"diction by exploiting the differences in preposition ambiguities across languages.",
"Results & Analysis Following the two-stage disambiguation pipeline (i.e.",
"target identification and classification), we separate the evaluation across the phases.",
"Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics.",
"Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right).",
"We evaluate each classifier along three dimensions-role and function independently, and full (i.e.",
"both role and function together).",
"When we have the gold targets, we only report accuracy because precision and recall are equal.",
"With automatically identified targets, we report P/R/F for each dimension.",
"Both tables show the impact of syntactic parsing on quality.",
"The rest of this section presents analyses of the results along various axes.",
"Target identification.",
"The identification heuristics described in §6.2 achieve an F 1 score of 89.2% on the test set using gold syntax.",
"15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression.",
"9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives.",
"Automatically generated parse trees slightly decrease quality (table 4) .",
"Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores.",
"We observe this degradation when we compare the Gold ID and the Auto ID blocks of for the most frequent baseline, which selects the most frequent role-function label pair given the (gold) lemma according to the training data.",
"Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction.",
"The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies.",
"Function and scene role performance.",
"Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems.",
"This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline's higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine.",
"Impact of automatic syntax.",
"Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition's object and governor.",
"In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores.",
"Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance.",
"As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth.",
"Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels.",
"Depth-4 (Exact) represents the full results in table 5.",
"These results show that the classifiers often mistake a label for another that is nearby in the hierarchy.",
"Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Table 6 : Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output.",
"\"Labels\" refers to the number of labels in the training set after coarsening.",
"(which makes sense as it is most frequent overall), and SOCIALROLE-ORGROLE and GESTALT-POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other).",
"Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus.",
"We found good interannotator agreement and provided initial supervised disambiguation results.",
"We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models.",
"Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4",
"5",
"6",
"6.1",
"6.3",
"6.4",
"6.5",
"7"
],
"paper_header_content": [
"Introduction",
"Background: Disambiguation of Prepositions and Possessives",
"Lexical Categories of Interest",
"The SNACS Hierarchy",
"Adopting the Construal Analysis",
"Annotated Reviews Corpus",
"Interannotator Agreement Study",
"Disambiguation Systems",
"Experimental Setup",
"Classification",
"Results & Analysis",
"Errors & Confusions",
"Conclusion"
]
} | GEM-SciDuet-train-122#paper-1332#slide-23 | Ongoing Work | Multi-lingual extensions to four languages
Streamlining the documentation and annotation processes
Semi-supervised and multi-lingual disambiguation systems
Integrating the scheme with a structural scheme (UCCA) | Multi-lingual extensions to four languages
Streamlining the documentation and annotation processes
Semi-supervised and multi-lingual disambiguation systems
Integrating the scheme with a structural scheme (UCCA) | [] |
GEM-SciDuet-train-123#paper-1336#slide-0 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-0 | Towards natural language understanding | 11. Reasoning about Time | 11. Reasoning about Time | [] |
GEM-SciDuet-train-123#paper-1336#slide-1 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-1 | Time is important | [June, 1989] Chris Robin lives in England and he is the person that you read about in Winnie the Pooh. As a boy, Chris lived in
Cotchfield Farm. When he was three, his father wrote a poem about him. His father later wrote Winnie the Pooh in 1925.
Where did Chris Robin live?
This is time sensitive.
When was Chris Robin born? poem [Chris at age 3]
Requires identifying relations between events, and temporal reasoning.
Temporal relation extraction Time could be expressed implicitly
A happens BEFORE/AFTER B;
Events are associated with time intervals:
12 temporal relations in every 100 tokens (in TempEval3 datasets) | [June, 1989] Chris Robin lives in England and he is the person that you read about in Winnie the Pooh. As a boy, Chris lived in
Cotchfield Farm. When he was three, his father wrote a poem about him. His father later wrote Winnie the Pooh in 1925.
Where did Chris Robin live?
This is time sensitive.
When was Chris Robin born? poem [Chris at age 3]
Requires identifying relations between events, and temporal reasoning.
Temporal relation extraction Time could be expressed implicitly
A happens BEFORE/AFTER B;
Events are associated with time intervals:
12 temporal relations in every 100 tokens (in TempEval3 datasets) | [] |
GEM-SciDuet-train-123#paper-1336#slide-2 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-2 | Temporal relations a key component | Temporal Relation (TempRel): I turned off the lights and left.
Challenges faced by existing datasets/annotation schemes:
Low inter-annotator agreement (IAA)
Time consuming: Typically, 2-3 hours for a single document.
Our goal is to address these challenges,
And, understand the task of temporal relations better. | Temporal Relation (TempRel): I turned off the lights and left.
Challenges faced by existing datasets/annotation schemes:
Low inter-annotator agreement (IAA)
Time consuming: Typically, 2-3 hours for a single document.
Our goal is to address these challenges,
And, understand the task of temporal relations better. | [] |
GEM-SciDuet-train-123#paper-1336#slide-3 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-3 | Highlights and outline | 276 docs: Annotated the 276 documents from TempEval3
1 week: Finished in about one week (using crowdsourcing)
IAA improved from the literature's 60% to 80%
Re-thinking identifying temporal relations between events
Results in re-defining the temporal relations task, and the corresponding annotation scheme, in order to make it feasible
Outline of our approach (3 components)
Multi-axis: types of events and their temporal structure
Start & End points: end-points are a source of confusion/ambiguity
Crowdsourcing: collect data more easily while maintaining a good quality | 276 docs: Annotated the 276 documents from TempEval3
1 week: Finished in about one week (using crowdsourcing)
IAA improved from the literature's 60% to 80%
Re-thinking identifying temporal relations between events
Results in re-defining the temporal relations task, and the corresponding annotation scheme, in order to make it feasible
Outline of our approach (3 components)
Multi-axis: types of events and their temporal structure
Start & End points: end-points are a source of confusion/ambiguity
Crowdsourcing: collect data more easily while maintaining a good quality | [] |
GEM-SciDuet-train-123#paper-1336#slide-4 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-4 | 1 temporal structure modeling existing annotation schemes | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Task: to annotate the TempRels between the bold faced events
(according to their start-points).
Existing Scheme 1: General graph modeling (e.g., TimeBank, ~2007)
Annotators freely add TempRels between those events.
It's inevitable that some TempRels will be missed,
Pointed out in many works.
E.g., only one relation between eliminate and restore is annotated in
TimeBank, while other relations such as tried is before eliminate and
tried is also before killed are missed.
Existing Scheme 2: Chain modeling (e.g., TimeBank-Dense ~2014)
All event pairs are presented, one-by-one, and an annotator must
provide a label for each of them.
No missing relations anymore.
Rationale: In the physical world, time is one dimensional, so we should
be able to temporally compare any two events.
However, some pairs of events are very confusing, resulting in low IAAs.
E.g., whats the relation between restore and killed? | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Task: to annotate the TempRels between the bold faced events
(according to their start-points).
Existing Scheme 1: General graph modeling (e.g., TimeBank, ~2007)
Annotators freely add TempRels between those events.
It's inevitable that some TempRels will be missed,
Pointed out in many works.
E.g., only one relation between eliminate and restore is annotated in
TimeBank, while other relations such as tried is before eliminate and
tried is also before killed are missed.
Existing Scheme 2: Chain modeling (e.g., TimeBank-Dense ~2014)
All event pairs are presented, one-by-one, and an annotator must
provide a label for each of them.
No missing relations anymore.
Rationale: In the physical world, time is one dimensional, so we should
be able to temporally compare any two events.
However, some pairs of events are very confusing, resulting in low IAAs.
E.g., whats the relation between restore and killed? | [] |
GEM-SciDuet-train-123#paper-1336#slide-5 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-5 | 1 temporal structure modeling difficulty | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Why is restore vs killed confusing?
One possible explanation: the text doesn't provide evidence that the
restore event actually happened, while killed actually happened
So, non-actual events don't have temporal relations?
We don't think so:
tried is obviously before restore: actual vs non-actual
eliminate is obviously before restore: non-actual vs non-actual
So relations may exist between non-actual events. | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Why is restore vs killed confusing?
One possible explanation: the text doesn't provide evidence that the
restore event actually happened, while killed actually happened
So, non-actual events don't have temporal relations?
We don't think so:
tried is obviously before restore: actual vs non-actual
eliminate is obviously before restore: non-actual vs non-actual
So relations may exist between non-actual events. | [] |
GEM-SciDuet-train-123#paper-1336#slide-6 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-6 | 1 temporal structure modeling multi axis | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
We suggest that while time is 1-dimensional in the physical world, multiple temporal axes may exist in natural language.
police tried 51 people killed | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
We suggest that while time is 1-dimensional in the physical world, multiple temporal axes may exist in natural language.
police tried 51 people killed | [] |
GEM-SciDuet-train-123#paper-1336#slide-7 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-7 | 1 multi axis modeling not simply actual vs non actual | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Is it a non-actual event axis? We think no.
First, tried, an actual event, is on both axes.
Second, whether restore is non-actual is questionable. It's very likely that order was indeed restored in the end.
Real world axis police tried 51 people killed | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Is it a non-actual event axis? We think no.
First, tried, an actual event, is on both axes.
Second, whether restore is non-actual is questionable. It's very likely that order was indeed restored in the end.
Real world axis police tried 51 people killed | [] |
GEM-SciDuet-train-123#paper-1336#slide-8 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-8 | 1 multi axis modeling | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Instead, we argue that it's an Intention Axis
It contains events that are intentions: restore and eliminate
and intersects with the real world axis at the event that invokes these intentions: tried
Real world axis police tried 51 people killed
So far, we introduced the intention axis and distinguished it from
The paper extends these ideas to more axes and discusses their difference from (non-)actuality axes
Event Type Time Axis
intention, opinion orthogonal axis
hypothesis, generic parallel axis
Negation not on any axis
static, recurrent not considered now
all others main axis | Police tried to eliminate the pro-independence army and restore order. At least 51 people were killed in clashes between police and citizens in the troubled region.
Instead, we argue that it's an Intention Axis
It contains events that are intentions: restore and eliminate
and intersects with the real world axis at the event that invokes these intentions: tried
Real world axis police tried 51 people killed
So far, we introduced the intention axis and distinguished it from
The paper extends these ideas to more axes and discusses their difference from (non-)actuality axes
Event Type Time Axis
intention, opinion orthogonal axis
hypothesis, generic parallel axis
Negation not on any axis
static, recurrent not considered now
all others main axis | [] |
GEM-SciDuet-train-123#paper-1336#slide-9 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-9 | Intention vs actuality | Identifying intention can be done locally, while identifying actuality often depends on other events.
I called the police to report the body. Yes Yes
I called the police to report the body, but the line was busy.
Police came to restore order. Yes Yes
Police came to restore order, but 51 people were killed. | Identifying intention can be done locally, while identifying actuality often depends on other events.
I called the police to report the body. Yes Yes
I called the police to report the body, but the line was busy.
Police came to restore order. Yes Yes
Police came to restore order, but 51 people were killed. | [] |
GEM-SciDuet-train-123#paper-1336#slide-10 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-10 | 1 multi axis modeling a balance between two schemes | Our proposal: Multi-axis modeling balances the extreme schemes.
Allows dense modeling, but only within an axis.
Scheme 1: General graph modeling
A strong restriction on modeling
Relations are inevitably missed
Scheme 2: Chain modeling
Any pair is comparable
But many are confusing | Our proposal: Multi-axis modeling balances the extreme schemes.
Allows dense modeling, but only within an axis.
Scheme 1: General graph modeling
A strong restriction on modeling
Relations are inevitably missed
Scheme 2: Chain modeling
Any pair is comparable
But many are confusing | [] |
GEM-SciDuet-train-123#paper-1336#slide-11 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-11 | Overview multi axis annotation scheme | Step 0: Given a document in raw text
Step 1: Annotate all the events
Step 2: Assign axis to each event (intention, hypothesis, )
Step 3: On each axis, perform a dense annotation scheme according to events start-points
In this paper, we use events provided by TempEval3, so we skipped Step 1.
Our second contribution is successfully using crowdsourcing for
Step 2 and Step 3, while maintaining a good quality. | Step 0: Given a document in raw text
Step 1: Annotate all the events
Step 2: Assign axis to each event (intention, hypothesis, )
Step 3: On each axis, perform a dense annotation scheme according to events start-points
In this paper, we use events provided by TempEval3, so we skipped Step 1.
Our second contribution is successfully using crowdsourcing for
Step 2 and Step 3, while maintaining a good quality. | [] |
GEM-SciDuet-train-123#paper-1336#slide-12 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-12 | 2 crowdsourcing | Annotation guidelines: Find at http://cogcomp.org/page/publication_view/834
Quality control: A gold set is annotated by experts beforehand.
Qualification: Before working on this task, one has to pass with 70% accuracy on sample gold questions.
Important: with the older task definition, annotators did not pass the qualification test.
Survival: During annotation, gold questions will be given to annotators without notice, and one has to maintain 70% accuracy; otherwise, one will be kicked out and all his/her annotations will be discarded.
Majority vote: At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision. | Annotation guidelines: Find at http://cogcomp.org/page/publication_view/834
Quality control: A gold set is annotated by experts beforehand.
Qualification: Before working on this task, one has to pass with 70% accuracy on sample gold questions.
Important: with the older task definition, annotators did not pass the qualification test.
Survival: During annotation, gold questions will be given to annotators without notice, and one has to maintain 70% accuracy; otherwise, one will be kicked out and all his/her annotations will be discarded.
Majority vote: At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision. | [] |
GEM-SciDuet-train-123#paper-1336#slide-13 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-13 | 3 an interesting observation ambiguity in end points | Given two time intervals:
Metric Pilot Task 1 Pilot Task 2 Interpretation
Comparing the end-points is
significantly harder than comparing
Avg. response time 33 sec 52 sec Task 2 is also significantly slower.
How durative events are expressed (by authors) and perceived (by readers):
Readers usually take longer to perceive durative events than punctual
events, e.g., restore order vs. try to restore order.
Writers usually assume that readers have a prior knowledge of durations
(e.g., college takes 4 years and watching an NBA game takes a few hours)
We only annotate start-points because duration annotation
should be a different task and follow special guidelines. | Given two time intervals:
Metric Pilot Task 1 Pilot Task 2 Interpretation
Comparing the end-points is
significantly harder than comparing
Avg. response time 33 sec 52 sec Task 2 is also significantly slower.
How durative events are expressed (by authors) and perceived (by readers):
Readers usually take longer to perceive durative events than punctual
events, e.g., restore order vs. try to restore order.
Writers usually assume that readers have a prior knowledge of durations
(e.g., college takes 4 years and watching an NBA game takes a few hours)
We only annotate start-points because duration annotation
should be a different task and follow special guidelines. | [] |
GEM-SciDuet-train-123#paper-1336#slide-14 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-14 | Quality metrics of our new dataset | Step 2: Axis Step 3: TempRel
Expert (~400 random relations)
Remember: Literature expert values are around 60%
For interested readers, please refer to our paper for more analysis regarding each individual label.
Worker Agreement With Aggregate (WAWA): assumes that the
aggregated annotations are gold and then compute the accuracy. | Step 2: Axis Step 3: TempRel
Expert (~400 random relations)
Remember: Literature expert values are around 60%
For interested readers, please refer to our paper for more analysis regarding each individual label.
Worker Agreement With Aggregate (WAWA): assumes that the
aggregated annotations are gold and then compute the accuracy. | [] |
GEM-SciDuet-train-123#paper-1336#slide-15 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-15 | Result on our new dataset | We implemented a baseline system, using conventional features and the sparse averaged perceptron algorithm
The overall performance on the proposed dataset is much better than those in the literature for TempRel extraction, which used
We do NOT mean that the proposed baseline is better than other existing algorithms
Rather, the proposed annotation scheme better defines the machine learning task.
Training Test Annotation Training Set Test Set P R F P R F
TBDense Same-axis & Cross-axis Same-axis | We implemented a baseline system, using conventional features and the sparse averaged perceptron algorithm
The overall performance on the proposed dataset is much better than those in the literature for TempRel extraction, which used
We do NOT mean that the proposed baseline is better than other existing algorithms
Rather, the proposed annotation scheme better defines the machine learning task.
Training Test Annotation Training Set Test Set P R F P R F
TBDense Same-axis & Cross-axis Same-axis | [] |
GEM-SciDuet-train-123#paper-1336#slide-16 | 1336 | A Multi-Axis Annotation Scheme for Event Temporal Relations | Existing temporal relation (TempRel) annotation schemes often have low interannotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition. This paper proposes a new multi-axis modeling to better capture the temporal structure of events. In addition, we identify that event end-points are a major source of confusion in annotation, so we also propose to annotate TempRels based on start-points only. A pilot expert annotation effort using the proposed scheme shows significant improvement in IAA from the conventional 60's to 80's (Cohen's Kappa). This better-defined annotation scheme further enables the use of crowdsourcing to alleviate the labor intensity for each annotator. We hope that this work can foster more interesting studies towards event understanding. 1 | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259
],
"paper_content_text": [
"Introduction Temporal relation (TempRel) extraction is an important task for event understanding, and it has drawn much attention in the natural language processing (NLP) community recently (UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015 Bethard et al., , 2016 Bethard et al., , 2017 Leeuwenberg and Moens, 2017; Ning et al., 2017 Ning et al., , 2018a .",
"Initiated by TimeBank (TB) (Pustejovsky et al., 2003b) , a number of TempRel datasets have been collected, including but not limited to the verbclause augmentation to TB (Bethard et al., 2007) , TempEval1-3 (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , TimeBank-Dense (TB-Dense) , EventTimeCorpus (Reimers et al., 2016) , and datasets with both temporal and other types of relations (e.g., coreference and causality) such as CaTeRs (Mostafazadeh et al., 2016) and RED (O'Gorman et al., 2016) .",
"These datasets were annotated by experts, but most still suffered from low inter-annotator agreements (IAA).",
"For instance, the IAAs of TB-Dense, RED and THYME-TimeML (Styler IV et al., 2014) were only below or near 60% (given that events are already annotated).",
"Since a low IAA usually indicates that the task is difficult even for humans (see Examples 1-3), the community has been looking into ways to simplify the task, by reducing the label set, and by breaking up the overall, complex task into subtasks (e.g., getting agreement on which event pairs should have a relation, and then what that relation should be) (Mostafazadeh et al., 2016; O'Gorman et al., 2016) .",
"In contrast to other existing datasets, Bethard et al.",
"(2007) achieved an agreement as high as 90%, but the scope of its annotation was narrowed down to a very special verb-clause structure.",
"(e1, e2), (e3, e4), and (e5, e6): TempRels that are difficult even for humans.",
"Note that only relevant events are highlighted here.",
"Example 1: Serbian police tried to eliminate the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Example 2: Service industries (e3:showed) solid job gains, as did manufacturers, two areas expected to be hardest (e4:hit) when the effects of the Asian crisis hit the American economy.",
"Example 3: We will act again if we have evidence he is (e5:rebuilding) his weapons of mass destruction capabilities, senior officials say.",
"In a bit of television diplomacy, Iraq's deputy foreign minister (e6:responded) from Baghdad in less than one hour, saying that .",
".",
".",
"This paper proposes a new approach to handling these issues in TempRel annotation.",
"First, we introduce multi-axis modeling to represent the temporal structure of events, based on which we anchor events to different semantic axes; only events from the same axis will then be temporally compared (Sec.",
"2).",
"As explained later, those event pairs in Examples 1-3 are difficult because they represent different semantic phenomena and belong to different axes.",
"Second, while we represent an event pair using two time intervals (say, [t 1 start , t 1 end ] and [t 2 start , t 2 end ]), we suggest that comparisons involving end-points (e.g., t 1 end vs. t 2 end ) are typically more difficult than comparing start-points (i.e., t 1 start vs. t 2 start ); we attribute this to the ambiguity of expressing and perceiving durations of events (Coll-Florit and Gennari, 2011) .",
"We believe that this is an important consideration, and we propose in Sec.",
"3 that TempRel annotation should focus on start-points.",
"Using the proposed annotation scheme, a pilot study done by experts achieved a high IAA of .84 (Cohen's Kappa) on a subset of TB-Dense, in contrast to the conventional 60's.",
"In addition to the low IAA issue, TempRel annotation is also known to be labor intensive.",
"Our third contribution is that we facilitate, for the first time, the use of crowdsourcing to collect a new, high quality (under multiple metrics explained later) TempRel dataset.",
"We explain how the crowdsourcing quality was controlled and how vague relations were handled in Sec.",
"4, and present some statistics and the quality of the new dataset in Sec.",
"5.",
"A baseline system is also shown to achieve much better performance on the new dataset, when compared with system performance in the literature (Sec.",
"6).",
"The paper's results are very encouraging and hopefully, this work would significantly benefit research in this area.",
"Temporal Structure of Events Given a set of events, one important question in designing the TempRel annotation task is: which pairs of events should have a relation?",
"The answer to it depends on the modeling of the overall temporal structure of events.",
"Motivation TimeBank (Pustejovsky et al., 2003b) laid the foundation for many later TempRel corpora, e.g., (Bethard et al., 2007; UzZaman et al., 2013; Cas-sidy et al., 2014) .",
"2 In TimeBank, the annotators were allowed to label TempRels between any pairs of events.",
"This setup models the overall structure of events using a general graph, which made annotators inadvertently overlook some pairs, resulting in low IAAs and many false negatives.",
"Example 4: Dense Annotation Scheme.",
"Serbian police (e7:tried) to (e8:eliminate) the proindependence Kosovo Liberation Army and (e1:restore) order.",
"At least 51 people were (e2:killed) in clashes between Serb police and ethnic Albanians in the troubled region.",
"Given 4 NON-GENERIC events above, the dense scheme presents 6 pairs to annotators one by one: (e7, e8), (e7, e1), (e7, e2), (e8, e1), (e8, e2), and (e1, e2).",
"Apparently, not all pairs are well-defined, e.g., (e8, e2) and (e1, e2), but annotators are forced to label all of them.",
"To address this issue, proposed a dense annotation scheme, TB-Dense, which annotates all event pairs within a sliding, two-sentence window (see Example 4).",
"It requires all TempRels between GENERIC 3 and NON-GENERIC events to be labeled as vague, which conceptually models the overall structure by two disjoint time-axes: one for the NON-GENERIC and the other one for the GENERIC.",
"However, as shown by Examples 1-3 in which the highlighted events are NON-GENERIC, the TempRels may still be ill-defined: In Example 1, Serbian police tried to restore order but ended up with conflicts.",
"It is reasonable to argue that the attempt to e1:restore order happened before the conflict where 51 people were e2:killed; or, 51 people had been killed but order had not been restored yet, so e1:restore is after e2:killed.",
"Similarly, in Example 2, service industries and manufacturers were originally expected to be hardest e4:hit but actually e3:showed gains, so e4:hit is before e3:showed; however, one can also argue that the two areas had showed gains but had not been hit, so e4:hit is after e3:showed.",
"Again, e5:rebuilding is a hypothetical event: \"we will act if rebuilding is true\".",
"Readers do not know for sure if \"he is already rebuilding weapons but we have no evidence\", or \"he will be building weapons in the future\", so annotators may disagree on the relation between e5:rebuilding and e6:responded.",
"Despite, importantly, minimizing missing annota-2 EventTimeCorpus (Reimers et al., 2016) is based on TimeBank, but aims at anchoring events onto explicit time expressions in each document rather than annotating TempRels between events, which can be a good complementary to other TempRel datasets.",
"3 For example, lions eat meat is GENERIC.",
"tions, the current dense scheme forces annotators to label many such ill-defined pairs, resulting in low IAA.",
"Multi-Axis Modeling Arguably, an ideal annotator may figure out the above ambiguity by him/herself and mark them as vague, but it is not a feasible requirement for all annotators to stay clear-headed for hours; let alone crowdsourcers.",
"What makes things worse is that, after annotators spend a long time figuring out these difficult cases, whether they disagree with each other or agree on the vagueness, the final decisions for such cases will still be vague.",
"As another way to handle this dilemma, TB-Dense resorted to a 80% confidence rule: annotators were allowed to choose a label if one is 80% sure that it was the writer's intent.",
"However, as pointed out by TB-Dense, annotators are likely to have rather different understandings of 80% confidence and it will still end up with disagreements.",
"In contrast to these annotation difficulties, humans can easily grasp the meaning of news articles, implying a potential gap between the difficulty of the annotation task and the one of understanding the actual meaning of the text.",
"In Examples 1-3, the writers did not intend to explain the TempRels between those pairs, and the original annotators of TimeBank 4 did not label relations between those pairs either, which indicates that both writers and readers did not think the TempRels between these pairs were crucial.",
"Instead, what is crucial in these examples is that \"Serbian police tried to restore order but killed 51 people\", that \"two areas were expected to be hit but showed gains\", and that \"if he rebuilds weapons then we will act.\"",
"To \"restore order\", to be \"hardest hit\", and \"if he was rebuilding\" were only the intention of police, the opinion of economists, and the condition to act, respectively, and whether or not they actually happen is not the focus of those writers.",
"This discussion suggests that a single axis is too restrictive to represent the complex structure of NON-GENERIC events.",
"Instead, we need a modeling which is more restrictive than a general graph so that annotators can focus on relation annotation (rather than looking for pairs first), but also more flexible than a single axis so that ill-defined 4 Recall that they were given the entire article and only salient relations would be annotated.",
"relations are not forcibly annotated.",
"Specifically, we need axes for intentions, opinions, hypotheses, etc.",
"in addition to the main axis of an article.",
"We thus argue for multi-axis modeling, as defined in Table 1 .",
"Following the proposed modeling, Examples 1-3 can be represented as in Fig.",
"1 .",
"This modeling aims at capturing what the author has explicitly expressed and it only asks annotators to look at comparable pairs, rather than forcing them to make decisions on often vaguely defined pairs.",
"In practice, we annotate one axis at a time: we first classify if an event is anchorable onto a given axis (this is also called the anchorability annotation step); then we annotate every pair of anchorable events (i.e., the relation annotation step); finally, we can move to another axis and repeat the two steps above.",
"Note that ruling out cross-axis relations is only a strategy we adopt in this paper to separate well-defined relations from ill-defined relations.",
"We do not claim that cross-axis relations are unimportant; instead, as shown in Fig.",
"2 , we think that cross-axis relations are a different semantic phenomenon that requires additional investigation.",
"Comparisons with Existing Work There have been other proposals of temporal structure modelings (Bramsen et al., 2006; Bethard et al., 2012) , but in general, the semantic phenomena handled in our work are very different and complementary to them.",
"(Bramsen et al., 2006) introduces \"temporal segments\" (a fragment of text that does not exhibit abrupt changes) in the medical domain.",
"Similarly, their temporal segments can also be considered as a special temporal structure modeling.",
"But a key difference is that (Bramsen et al., 2006) only annotates inter-segment relations, ignoring intra-segment ones.",
"Since those segments are usually large chunks of text, the semantics handled in (Bramsen et al., 2006) is in a very coarse granularity (as pointed out by (Bramsen et al., 2006) ) and is thus different from ours.",
"(Bethard et al., 2012) proposes a tree structure for children's stories, which \"typically have simpler temporal structures\", as they pointed out.",
"Moreover, in their annotation, an event can only be linked to a single nearby event, even if multiple nearby events may exist, whereas we do not have such restrictions.",
"In addition, some of the semantic phenomena in Table 1 have been discussed in existing work.",
"Here we compare with them for a better positioning of the proposed scheme.",
"Axis Projection TB-Dense handled the incomparability between main-axis events and HYPOTHESIS/NEGATION by treating an event as having occurred if the event is HYPOTHESIS/NEGATION.",
"5 In our multiaxis modeling, the strategy adopted by TB-Dense falls into a more general approach, \"axis projection\".",
"That is, projecting events across different axes to handle the incomparability between any two axes (not limited to HYPOTHE-SIS/NEGATION).",
"Axis projection works well for certain event pairs like Asian crisis and e4:hardest hit in Example 2: as in Fig.",
"1 , Asian crisis is before expected, which is again before e4:hardest hit, so Asian crisis is before e4:hardest hit.",
"Generally, however, since there is no direct evidence that can guide the projection, annotators may have different projections (imagine projecting e5:rebuilding onto the main axis: is it in the past or in the future?).",
"As a result, axis projec-tion requires many specially designed guidelines or strong external knowledge.",
"Annotators have to rigidly follow the sometimes counter-intuitive guidelines or \"guess\" a label instead of looking for evidence in the text.",
"When strong external knowledge is involved in axis projection, it becomes a reasoning process and the resulting relations are a different type.",
"For example, a reader may reason that in Example 3, it is well-known that they did \"act again\", implying his e5:rebuilding had happened and is before e6:responded.",
"Another example is in Fig.",
"2 .",
"It is obvious that relations based on these projections are not the same with and more challenging than those same-axis relations, so in the current stage, we should focus on same-axis relations only.",
"tended the conference, the projection of submit a paper onto the main axis is clearly before attended.",
"However, this projection requires strong external knowledge that a paper should be submitted before attending a conference.",
"Again, this projection is only a guess based on our external knowledge and it is still open whether the paper is submitted or not.",
"Introduction of the Orthogonal Axes Another prominent difference to earlier work is the introduction of orthogonal axes, which has not been used in any existing work as we know.",
"A special property is that the intersection event of two axes can be compared to events from both, which can sometimes bridge events, e.g., in Fig.",
"1 , Asian crisis is seemingly before hardest hit due to their connections to expected.",
"Since Asian crisis is on the main axis, it seems that e4:hardest hit is on the main axis as well.",
"However, the \"hardest hit\" in \"Asian crisis before hardest hit\" is only a projection of the original e4:hardest hit onto the real axis and is valid only when this OPINION is true.",
"Nevertheless, OPINIONS are not always true and INTENTIONS are not always fulfilled.",
"In Example 5, e9:sponsoring and e10:resolve are the opinions of the West and the speaker, respectively; whether or not they are true depends on the au-thors' implications or the readers' understandings, which is often beyond the scope of TempRel annotation.",
"6 Example 6 demonstrates a similar situation for INTENTIONS: when reading the sentence of e11:report, people are inclined to believe that it is fulfilled.",
"But if we read the sentence of e12:report, we have reason to believe that it is not.",
"When it comes to e13:tell, it is unclear if everyone told the truth.",
"The existence of such examples indicates that orthogonal axes are a better modeling for INTENTIONS and OPINIONS.",
"Example 5: Opinion events may not always be true.",
"He is ostracized by the West for (e9:sponsoring) terrorism.",
"We need to (e10:resolve) the deep-seated causes that have resulted in these problems.",
"Example 6: Intentions may not always be fulfilled.",
"A passerby called the police to (e11:report) the body.",
"A passerby called the police to (e12:report) the body.",
"Unfortunately, the line was busy.",
"I asked everyone to (e13:tell) the truth.",
"Differences from Factuality Event modality have been discussed in many existing event annotation schemes, e.g., Event Nugget (Mitamura et al., 2015) , Rich ERE , and RED.",
"Generally, an event is classified as Actual or Non-Actual, a.k.a.",
"factuality (Saurí and Pustejovsky, 2009; Lee et al., 2015) .",
"The main-axis events defined in this paper seem to be very similar to Actual events, but with several important differences: First, future events are Non-Actual because they indeed have not happened, but they may be on the main axis.",
"Second, events that are not on the main axis can also be Actual events, e.g., intentions that are fulfilled, or opinions that are true.",
"Third, as demonstrated by Examples 5-6, identifying anchorability as defined in Table 1 is relatively easy, but judging if an event actually happened is often a high-level understanding task that requires an understanding of the entire document or external knowledge.",
"Interested readers are referred to Appendix B for a detailed analysis of the difference between Anchorable (onto the main axis) and Actual on a subset of RED.",
"Interval Splitting All existing annotation schemes adopt the interval representation of events (Allen, 1984) and there are 13 relations between two intervals (for readers who are not familiar with it, please see Fig.",
"4 in the appendix).",
"To reduce the burden of annotators, existing schemes often resort to a reduced set of the 13 relations.",
"For instance, Verhagen et al.",
"(2007) Let [t 1 start , t 1 end ] and [t 2 start , t 2 end ] be the time intervals of two events (with the implicit assumption that t start t end ).",
"Instead of reducing the relations between two intervals, we try to explicitly compare the time points (see Fig.",
"3 ).",
"In this way, the label set is simply before, after and equal, 7 while the expressivity remains the same.",
"This interval splitting technique has also been used in (Raghavan et al., 2012) .",
"Figure 3 : The comparison of two event time intervals, [t 1 start , t 1 end ] and [t 2 start , t 2 end ], can be decomposed into four comparisons t 1 start vs. t 2 start , t 1 start vs. t 2 end , t 1 end vs. t 2 start , and t 1 end vs. t 2 end , without loss of generality.",
"[!",
"\"#$%# & , !",
"()* & ] [!",
"\"#$%# + , !",
"()* + ] time In addition to same expressivity, interval splitting can provide even more information when the relation between two events is vague.",
"In the conventional setting, imagine that the annotators find that the relation between two events can be either before or before and overlap.",
"Then the resulting annotation will have to be vague, although the annotators actually agree on the relation between t 1 start and t 2 start .",
"Using interval splitting, however, such information can be preserved.",
"An obvious downside of interval splitting is the increased number of annotations needed (4 point comparisons vs. 1 interval comparison).",
"In practice, however, it is usually much fewer than 4 comparisons.",
"For example, when we see t 1 end < t 2 start (as in Fig.",
"3) , the other three can be skipped because they can all be inferred.",
"Moreover, although the number of annotations is increased, the work load for human annotators may still be the same, because even in the conventional scheme, they still need to think of the relations between start-and end-points before they can make a decision.",
"Ambiguity of End-Points During our pilot annotation, the annotation quality dropped significantly when the annotators needed to reason about relations involving end-points of events.",
"Table 2 shows four metrics of task difficulty when only t 1 start vs. t 2 start or t 1 end vs. t 2 end are annotated.",
"Non-anchorable events were removed for both jobs.",
"The first two metrics, qualifying pass rate and survival rate are related to the two quality control protocols (see Sec.",
"4.1 for details).",
"We can see that when annotating the relations between end-points, only one out of ten crowdsourcers (11%) could successfully pass our qualifying test; and even if they had passed it, half of them (56%) would have been kicked out in the middle of the task.",
"The third line is the overall accuracy on gold set from all crowdsourcers (excluding those who did not pass the qualifying test), which drops from 67% to 37% when annotating end-end relations.",
"The last line is the average response time per annotation and we can see that it takes much longer to label an end-end TempRel (52s) than a start-start TempRel (33s).",
"This important discovery indicates that the TempRels between end-points is probably governed by a different linguistic phenomenon.",
"Table 2 : Annotations involving the end-points of events are found to be much harder than only comparing the start-points.",
"We hypothesize that the difficulty is a mixture of how durative events are expressed (by authors) and perceived (by readers) in natural language.",
"In cognitive psychology, Coll-Florit and Gennari (2011) discovered that human readers take longer to perceive durative events than punctual events, e.g., owe 50 bucks vs. lost 50 bucks.",
"From the writer's standpoint, durations are usually fuzzy (Schockaert and De Cock, 2008) , or assumed to be a prior knowledge of readers (e.g., college takes 4 years and watching an NBA game takes a few hours), and thus not always written explicitly.",
"Given all these reasons, we ignore the comparison of end-points in this work, although event duration is indeed, another important task.",
"Annotation Scheme Design To summarize, with the proposed multi-axis modeling (Sec.",
"2) and interval splitting (Sec.",
"3), our annotation scheme is two-step.",
"First, we mark every event candidate as being temporally Anchorable or not (based on the time axis we are working on).",
"Second, we adopt the dense annotation scheme to label TempRels only between Anchorable events.",
"Note that we only work on verb events in this paper, so non-verb event candidates are also deleted in a preprocessing step.",
"We design crowdsourcing tasks for both steps and as we show later, high crowdsourcing quality was achieved on both tasks.",
"In this section, we will discuss some practical issues.",
"Quality Control for Crowdsourcing We take advantage of the quality control feature in CrowdFlower in our crowdsourcing jobs.",
"For any job, a set of examples are annotated by experts beforehand, which is considered gold and will serve two purposes.",
"(i) Qualifying test: Any crowdsourcer who wants to work on this job has to pass with 70% accuracy on 10 questions randomly selected from the gold set.",
"(ii) Surviving test: During the annotation process, questions from the gold set will be randomly given to crowdsourcers without notice, and one has to maintain 70% accuracy on the gold set till the end of the annotation; otherwise, he or she will be forbidden from working on this job anymore and all his/her annotations will be discarded.",
"At least 5 different annotators are required for every judgement and by default, the majority vote will be the final decision.",
"Vague Relations How to handle vague relations is another issue in temporal annotation.",
"In non-dense schemes, annotators usually skip the annotation of a vague pair.",
"In dense schemes, a majority agreement rule is applied as a postprocessing step to back off a decision to vague when annotators cannot pass a majority vote , which reminds us that annotators often label a vague relation as non-vague due to lack of thinking.",
"We decide to proactively reduce the possibility of such situations.",
"As mentioned earlier, our label set for t 1 start vs. t 2 start is before, after, equal and vague.",
"We ask two questions: Q1=Is it possible that t 1 start is before t 2 start ?",
"Q2=Is it possible that t 2 start is before t 1 start ?",
"Let the an-swers be A1 and A2.",
"Then we have a oneto-one mapping as follows: A1=A2=yes7 !vague, A1=A2=no7 !equal, A1=yes, A2=no7 !before, and A1=no, A2=yes7 !after.",
"An advantage is that one will be prompted to think about all possibilities, thus reducing the chance of overlook.",
"Finally, the annotation interface we used is shown in Appendix C. Corpus Statistics and Quality In this section, we first focus on annotations on the main axis, which is usually the primary storyline and thus has most events.",
"Before launching the crowdsourcing tasks, we checked the IAA between two experts on a subset of TB-Dense (about 100 events and 400 relations).",
"A Cohen's Kappa of .85 was achieved in the first step: anchorability annotation.",
"Only those events that both experts labeled Anchorable were kept before they moved onto the second step: relation annotation, for which the Cohen's Kappa was .90 for Q1 and .87 for Q2.",
"Table 3 furthermore shows the distribution, Cohen's Kappa, and F 1 of each label.",
"We can see the Kappa and F 1 of vague (=.75, F 1 =.81) are generally lower than those of the other labels, confirming that temporal vagueness is a more difficult semantic phenomenon.",
"Nevertheless, the overall IAA shown in Table 3 is a significant improvement compared to existing datasets.",
"With the improved IAA confirmed by experts, we sequentially launched the two-step crowdsourcing tasks through CrowdFlower on top of the same 36 documents of TB-Dense.",
"To evaluate how well the crowdsourcers performed on our task, we calculate two quality metrics: accuracy on the gold set and the Worker Agreement with Aggregate (WAWA).",
"WAWA indicates the average number of crowdsourcers' responses agreed with the aggregate answer (we used majority aggregation for each question).",
"For example, if N individual responses were obtained in total, and n of them were correct when compared to the aggregate answer, then WAWA is simply n/N .",
"In the first step, crowdsourcers labeled 28% of the events as Non-Anchorable to the main axis, with an accuracy on the gold of .86 and a WAWA of .79.",
"With Non-Anchorable events filtered, the relation annotation step was launched as another crowdsourcing task.",
"The label distribution is b=.",
"50, a=.28, e=.03, and v=.19 (consistent with Table 3 ).",
"In Table 4 , we show the annotation quality of this step using accuracy on the gold set and WAWA.",
"We can see that the crowdsourcers achieved a very good performance on the gold set, indicating that they are consistent with the authors who created the gold set; these crowdsourcers also achieved a high-level agreement under the WAWA metric, indicating that they are consistent among themselves.",
"These two metrics indicate that the annotation task is now well-defined and easy to understand even by non-experts.",
"Table 4 : Quality analysis of the relation annotation step of MATRES.",
"\"Q1\" and \"Q2\" refer to the two questions crowdsourcers were asked (see Sec.",
"4.2 for details).",
"Line 1 measures the level of consistency between crowdsourcers and the authors and line 2 measures the level of consistency among the crowdsourcers themselves.",
"We continued to annotate INTENTION and OPINION which create orthogonal branches on the main axis.",
"In the first step, crowdsourcers achieved an accuracy on gold of .82 and a WAWA of .89.",
"Since only 16% of the events are in this category and these axes are usually very short (e.g., allocate funds to build a museum.",
"), the annotation task is relatively small and two experts took the second step and achieved an agreement of .86 (F 1 ).",
"We name our new dataset MATRES for Multi-Axis Temporal RElations for Start-points.",
"Each individual judgement cost us $0.01 and MATRES in total cost about $400 for 36 documents.",
"Comparison to TB-Dense To get another checkpoint of the quality of the new dataset, we compare with the annotations of TB-Dense.",
"TB-Dense has 1.1K verb events, between which 3.4K event-event (EE) relations are annotated.",
"In the new dataset, 72% of the events (0.8K) are anchored onto the main axis, resulting in 1.6K EE relations, and 16% (0.2K) are anchored onto orthogonal axes, resulting in 0.2K EE relations.",
"The following comparison is based on the 1.8K EE relations in common.",
"Moreover, since TB-Dense annotations are for intervals instead of start-points only, we converted TB-Dense's interval relations to start-point relations (e.g., if A includes B, then t A start is before t B start ).",
"The confusion matrix is shown in Table 5 .",
"A few remarks about how to understand it: First, when TB-Dense labels before or after, MATRES also has a high-probability of having the same label (b=455/513=.89, a=309/438=.71); when MATRES labels vague, TB-Dense is also very likely to label vague (v=192/312=.62).",
"This indicates the high agreement level between the two datasets if the interval-or point-based annotation difference is ruled out.",
"Second, many vague relations in TB-Dense are labeled as before, after or equal in MATRES.",
"This is expected because TB-Dense annotates relations between intervals, while MATRES annotates start-points.",
"When durative events are involved, the problem usually becomes more difficult and interval-based annotation is more likely to label vague (see earlier discussions in Sec.",
"3).",
"Example 7 shows three typical cases, where e14:became, e17:backed, e18:rose and e19:extending can be considered durative.",
"If only their start-points are considered, the crowdsourcers were correct in labeling e14 before e15, e16 after e17, and e18 equal to e19, although TB-Dense says vague for all of them.",
"Third, equal seems to be the relation that the two dataset mostly disagree on, which is probably due to crowdsourcers' lack of understanding in time granularity and event coreference.",
"Although equal relations only constitutes a small portion in all relations, it needs further investigation.",
"Baseline System We develop a baseline system for TempRel extraction on MATRES, assuming that all the events and axes are given.",
"The following commonly-Example 7: Typical cases that TB-Dense annotated vague but MATRES annotated before, after, and equal, respectively.",
"At one point , when it (e14:became) clear controllers could not contact the plane, someone (e15:said) a prayer.",
"TB-Dense: vague; MATRES: before The US is bolstering its military presence in the gulf, as President Clinton (e16:discussed) the Iraq crisis with the one ally who has (e17:backed) his threat of force, British prime minister Tony Blair.",
"TB-Dense: vague; MATRES: after Average hourly earnings of nonsupervisory employees (e18:rose) to $12.51.",
"The gain left wages 3.8 percent higher than a year earlier, (e19:extending) a trend that has given back to workers some of the earning power they lost to inflation in the last decade.",
"TB-Dense: vague; MATRES: equal used features for each event pair are used: (i) The part-of-speech (POS) tags of each individual event and of its neighboring three words.",
"(ii) The sentence and token distance between the two events.",
"(iii) The appearance of any modal verb between the two event mentions in text (i.e., will, would, can, could, may and might).",
"(iv) The appearance of any temporal connectives between the two event mentions (e.g., before, after and since).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998) .",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head words of the preposition phrases that cover each event, respectively.",
"And (viii) event properties such as Aspect, Modality, and Polarity that come with the TimeBank dataset and are commonly used as features.",
"The proposed baseline system uses the averaged perceptron algorithm to classify the relation between each event pair into one of the four relation types.",
"We adopted the same train/dev/test split of TB-Dense, where there are 22 documents in train, 5 in dev, and 9 in test.",
"Parameters were tuned on the train-set to maximize its F 1 on the dev-set, after which the classifier was retrained on the union of train and dev.",
"A detailed analysis of the baseline system is provided in Table 6 .",
"The performance on equal and vague is lower than on before and after, probably due to shortage in these labels in the training data and the inherent difficulty in event coreference and temporal vagueness.",
"We can see, though, that the overall performance on MATRES is much better than those in the literature for TempRel extraction, which used to be in the low 50's Ning et al., 2017 ).",
"The same system was also retrained and tested on the original annotations of TB-Dense (Line \"Original\"), which confirms the significant improvement if the proposed annotation scheme is used.",
"Note that we do not mean to say that the proposed baseline system itself is better than other existing algorithms, but rather that the proposed annotation scheme and the resulting dataset lead to better defined machine learning tasks.",
"In the future, more data can be collected and used with advanced techniques such as ILP (Do et al., 2012) , structured learning (Ning et al., 2017) or multi-sieve .",
"Table 6 : Performance of the proposed baseline system on MATRES.",
"Line \"Original\" is the same system retrained on the original TB-Dense and tested on the same subset of event pairs.",
"Due to the limited number of equal examples, the system did not make any equal predictions on the testset.",
"Conclusion This paper proposes a new scheme for TempRel annotation between events, simplifying the task by focusing on a single time axis at a time.",
"We have also identified that end-points of events is a major source of confusion during annotation due to reasons beyond the scope of TempRel annotation, and proposed to focus on start-points only and handle the end-points issue in further investigation (e.g., in event duration annotation tasks).",
"Pilot study by expert annotators shows significant IAA improvements compared to literature values, indicating a better task definition under the proposed scheme.",
"This further enables the usage of crowdsourcing to collect a new dataset, MATRES, at a lower time cost.",
"Analysis shows that MATRES, albeit crowdsourced, has achieved a reasonably good agreement level, as confirmed by its performance on the gold set (agreement with the authors), the WAWA metric (agreement with the crowdsourcers themselves), and consistency with TB-Dense (agreement with an existing dataset).",
"Given the fact that existing schemes suffer from low IAAs and lack of data, we hope that the findings in this work would provide a good start towards understanding more sophisticated semantic phenomena in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"2.3",
"2.3.1",
"2.3.2",
"2.3.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"5",
"5.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Temporal Structure of Events",
"Motivation",
"Multi-Axis Modeling",
"Comparisons with Existing Work",
"Axis Projection",
"Introduction of the Orthogonal Axes",
"Differences from Factuality",
"Interval Splitting",
"Ambiguity of End-Points",
"Annotation Scheme Design",
"Quality Control for Crowdsourcing",
"Vague Relations",
"Corpus Statistics and Quality",
"Comparison to TB-Dense",
"Baseline System",
"Conclusion"
]
} | GEM-SciDuet-train-123#paper-1336#slide-16 | Conclusion | We proposed to re-think the important tasks of identifying
temporal relations, resulting in a new annotation scheme it.
Multi-axis modeling: a balance between general graphs and chains
Identified that end-point is a major source of confusion
Showed that the new scheme is well-defined even for non-experts and crowdsourcing can be used.
The proposed scheme significantly improves the inter-annotator
The resulting dataset defines an easier machine learning task.
We hope that this work can be a good start for further investigation in this important area. | We proposed to re-think the important tasks of identifying
temporal relations, resulting in a new annotation scheme it.
Multi-axis modeling: a balance between general graphs and chains
Identified that end-point is a major source of confusion
Showed that the new scheme is well-defined even for non-experts and crowdsourcing can be used.
The proposed scheme significantly improves the inter-annotator
The resulting dataset defines an easier machine learning task.
We hope that this work can be a good start for further investigation in this important area. | [] |
GEM-SciDuet-train-124#paper-1342#slide-0 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-0 | Sentence extraction | document modeling is essential to many NLP tasks
apparent in problems where capturing long range
from documents, with an end goal in mind
Extractive Summarization and Question Answer Selection
Seoul (CNN) -- South Korea's Prime Minister Lee Wan-koo offered to resign on Monday amid a growing political scandal.
Lee will stay in his official role until South Korean President
Park Geun-hye accepts his resignation.
Park heard about the resignation ...
Calls for Lee to resign began after South Korean tycoon Sung Woan-jong was found hanging from a tree in Seoul.
Woan-jong was found hanging ... both tasks require
Sung, who was under investigation for fraud and bribery left a note listing names and amounts of cash given to top officials, including those who work for the President. deep understanding of
the document Lee and seven other politicians ...
Question: Who resigned over the scandal?
Answer: South Korea's Prime
Minister Lee Wan Koo
local and global contextual reasoning | document modeling is essential to many NLP tasks
apparent in problems where capturing long range
from documents, with an end goal in mind
Extractive Summarization and Question Answer Selection
Seoul (CNN) -- South Korea's Prime Minister Lee Wan-koo offered to resign on Monday amid a growing political scandal.
Lee will stay in his official role until South Korean President
Park Geun-hye accepts his resignation.
Park heard about the resignation ...
Calls for Lee to resign began after South Korean tycoon Sung Woan-jong was found hanging from a tree in Seoul.
Woan-jong was found hanging ... both tasks require
Sung, who was under investigation for fraud and bribery left a note listing names and amounts of cash given to top officials, including those who work for the President. deep understanding of
the document Lee and seven other politicians ...
Question: Who resigned over the scandal?
Answer: South Korea's Prime
Minister Lee Wan Koo
local and global contextual reasoning | [] |
GEM-SciDuet-train-124#paper-1342#slide-1 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-1 | Documents are more than plain text chunks | Seoul (CNN) South Korea's Prime Minister
images Lee Wan-koo offered to resign on Monday
amid a growing political scandal. image captions Lee will stay in his official role until South
Lee will stay in his official role until South
Korean President Park Geun-hye accepts his
resignation. He has transferred his ...
They can also contain over bribery scandal investigation
South Korean PM Suicide note leads to
offers resignation government bribery
which can contain key events | Seoul (CNN) South Korea's Prime Minister
images Lee Wan-koo offered to resign on Monday
amid a growing political scandal. image captions Lee will stay in his official role until South
Lee will stay in his official role until South
Korean President Park Geun-hye accepts his
resignation. He has transferred his ...
They can also contain over bribery scandal investigation
South Korean PM Suicide note leads to
offers resignation government bribery
which can contain key events | [] |
GEM-SciDuet-train-124#paper-1342#slide-2 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-2 | Main ideas of this work | external information can be useful to sentence selection
document modeling with richer content: read whole document before starting to extract sentences
do not rely on similarity metrics to extract sentences
neural architecture for sentence extraction (XNet) | external information can be useful to sentence selection
document modeling with richer content: read whole document before starting to extract sentences
do not rely on similarity metrics to extract sentences
neural architecture for sentence extraction (XNet) | [] |
GEM-SciDuet-train-124#paper-1342#slide-3 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-3 | XNet Document Encoder | Convolutional Sentence encoder Document encoder
sentences are encoded by a convolutional encoder (Kim, 2014)
get a document embedding before extraction begins | Convolutional Sentence encoder Document encoder
sentences are encoded by a convolutional encoder (Kim, 2014)
get a document embedding before extraction begins | [] |
GEM-SciDuet-train-124#paper-1342#slide-4 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-4 | XNet Sentence Extractor | 1 e2 3 External
takes document embedding as input
instead of attending to text, attends over external information
softmax over the output produces binary labels | 1 e2 3 External
takes document embedding as input
instead of attending to text, attends over external information
softmax over the output produces binary labels | [] |
GEM-SciDuet-train-124#paper-1342#slide-5 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-5 | XNet for Extractive Summarization | [architecture figure] attention over external information
sentences encoded with the convolutional sentence encoder
external attention over the title (T) and captions (C1, C2) | [architecture figure] attention over external information
sentences encoded with the convolutional sentence encoder
external attention over the title (T) and captions (C1, C2) | [] |
GEM-SciDuet-train-124#paper-1342#slide-6 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-6 | Extractive Summarization Experiments | CNN part of CNN/DailyMail dataset
extract titles (each article has a title) and image captions (avg 3 captions per article; 40% articles have at least one)
oracle summaries: select sentences that give collectively high ROUGE score wrt the gold summary (Nallapati et al., 2017)
summary: 3 top-scoring sentences according to the extractor | CNN part of CNN/DailyMail dataset
extract titles (each article has a title) and image captions (avg 3 captions per article; 40% articles have at least one)
oracle summaries: select sentences that give collectively high ROUGE score wrt the gold summary (Nallapati et al., 2017)
summary: 3 top-scoring sentences according to the extractor | [] |
GEM-SciDuet-train-124#paper-1342#slide-7 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-7 | External info helps extractive summarization | Ablation results on validation set (ROUGE recall scores)
PointerNet is Cheng and Lapata (2016)
ROUGE 1 ROUGE 2
Pointer Pointer Net Net | Ablation results on validation set (ROUGE recall scores)
PointerNet is Cheng and Lapata (2016)
ROUGE 1 ROUGE 2
Pointer Pointer Net Net | [] |
GEM-SciDuet-train-124#paper-1342#slide-8 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-8 | Results | model ROUGE 1 ROUGE 2 ROUGE L
(XNet is XNet+title+caption, PointerNet is Cheng and Lapata (2016))
consistent behavior on SQuAD, WikiQA and MSMarco
consistent behavior on all metrics
Wrd Cnt: word count
Wgt Wrd Cnt: weighted word count
PairCNN: encode (question, candidate sent) isolated | model ROUGE 1 ROUGE 2 ROUGE L
(XNet is XNet+title+caption, PointerNet is Cheng and Lapata (2016))
consistent behavior on SQuAD, WikiQA and MSMarco
consistent behavior on all metrics
Wrd Cnt: word count
Wgt Wrd Cnt: weighted word count
PairCNN: encode (question, candidate sent) isolated | [] |
GEM-SciDuet-train-124#paper-1342#slide-9 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-9 | Human evaluation confirms the quality of generated summaries | model 1st 2nd 3rd 4th
annotators ranked systems from best (1st) to worst (4th) | model 1st 2nd 3rd 4th
annotators ranked systems from best (1st) to worst (4th) | [] |
GEM-SciDuet-train-124#paper-1342#slide-10 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-10 | XNet for QA Selection | qa ISF IDF External attention
inverse document frequency (IDF)
inverse sentence frequency (ISF; Trischler
local ISF (ISF with considering no of sents in article) | qa ISF IDF External attention
inverse document frequency (IDF)
inverse sentence frequency (ISF; Trischler
local ISF (ISF with considering no of sents in article) | [] |
GEM-SciDuet-train-124#paper-1342#slide-11 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks .",
"3 It does not exploit any external information.",
"4 The architecture of POINTERNET is closely related to our model without external information.",
"4 Adding external information to POINTERNET is an in- Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003) , a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.",
"6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously.",
"teresting direction of research but we do not pursue it here.",
"It requires decoding with multiple types of attentions and this is not the focus of this paper.",
"5 We are unable to compare our results to the extractive system of Nallapati et al.",
"(2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al.",
"(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries.",
"6 We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2.\"",
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information, it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAP-TION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in to XNET.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
"The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINT-ERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA , SQuAD (Rajpurkar et al., 2016) , NewsQA (Trischler et al., 2016) , and MSMarco (Nguyen et al., 2016) .",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MS-Marco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92, 525, 5, 165 and 5, 124 samples for NewsQA; 79, 032, 8, 567, and 10, 570 for SQuAD; 873, 122, and 79, 704, 9, 706, and 9, 650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; .",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y t |q) = p(y t |s t , q) = softmax(g(s t , q)) where g(s t , q) = ReLU(W sq · [s t ; q] + b sq ).",
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016) , ABCNN (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sen- SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN ----68.86 69.57 ------ABCNN ----69.21 71.08 ------L.D.C ----70.58 72.26 ------KV-MemNN ----70.69 72.65 ------LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 (Yin et al., 2016) , L.D.C (Wang and Jiang, 2017) , KV-MemNN (Miller et al., 2016) , and COMPAGGR, a state-of-the-art system by .",
"(WGT) WRD CNT stands for the (weighted) word count baseline.",
"See text for more details.",
"tence extractor conditioned only on the query q as external information (Eq.",
"(3) ).",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn.",
"(4) ).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco.",
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.",
"7 Interestingly, our model lags behind COM-PAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-11 | QA Selection Experiments | leave out unanswered questions
report scores for accuracy, Mean Average Precision
(MAP) and Mean Reciprocal Rank (MRR) | leave out unanswered questions
report scores for accuracy, Mean Average Precision
(MAP) and Mean Reciprocal Rank (MRR) | [] |
GEM-SciDuet-train-124#paper-1342#slide-12 | 1342 | Document Modeling with External Attention for Sentence Extraction | Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA. 1 * The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1 Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233
],
"paper_content_text": [
"Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP) .",
"A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling.",
"However, document modeling, a key to many natural language understanding tasks, is still an open challenge.",
"Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016) .",
"Lin et al.",
"(2015) and Yang et al.",
"(2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity.",
"Tran et al.",
"(2016) further proposed a contextual language model that considers information at interdocument level.",
"It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; .",
"In this paper, we formalize the use of external information to further guide document modeling for end goals.",
"We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with \"external attention.\"",
"Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor.",
"Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al.",
"(2018) in that it derives the document meaning representation from its sentences and their constituent words.",
"Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document.",
"Our model explicitly biases the extractor with external cues and implicitly biases the encoder through training.",
"We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information.",
"These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document.",
"Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query.",
"For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information.",
"For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues.",
"Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues.",
"Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) , our model is the first to exploit such information in deep learning-based summarization.",
"We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015) .",
"Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information.",
"We also conduct a human evaluation to judge which type of summary participants prefer.",
"Our results overwhelmingly show that human subjects find our summaries more informative and complete.",
"Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be \"read\" to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; .",
"Our model with ISF and IDF scores as external features achieves competitive results for answer selection.",
"Our ensemble model combining scores from our model and word overlap scores using a logistic regression layer achieves state-ofthe-art results on the popular question answering datasets WikiQA and NewsQA (Trischler et al., 2016) , and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016) .",
"We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where each candidate answer sentence is contextually independent of each other.",
"Document Modeling For Sentence Extraction Given a document D consisting of a sequence of n sentences (s 1 , s 2 , ..., s n ) , we aim at labeling each sentence s i in D with a label y i ∈ {0, 1} where y i = 1 indicates that s i is extraction-worthy and 0 otherwise.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017) .",
"The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below.",
"The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f ∈ R k−h+1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1 , ker-nels of size 2 (red) and 4 (blue) are applied three times each.",
"The max-pooling over time operation yields two feature lists f K 2 and f K 4 ∈ R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997) .",
"Given a document D consisting of a sequence of sentences (s 1 , s 2 , .",
".",
".",
", s n ), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Filippova et al., 2015) .",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues.",
"It is implemented with another RNN with LSTM cells with an attention mechanism (Bahdanau et al., 2015) and a softmax layer.",
"Our attention mechanism differs from the standard practice of attending intermediate states of the input (encoder).",
"Instead, our extractor attends to a sequence of p pieces of external information E : (e 1 , e 2 , ..., e p ) relevant for the task (e.g., e i is a title or an image caption for summarization) for cues.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences and the external information.",
"This way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information.",
"Given sentence s t at time step t, it returns a probability distribution over labels as: p(y t |s t , D, E) = softmax(g(h t , h t )) (1) g(h t , h t ) = U o (V h h t + W h h t ) (2) h t = LSTM(s t , h t−1 ) h t = p i=1 α (t,i) e i , where α (t,i) = exp(h t e i ) j exp(h t e j ) where g(·) is a single-layer neural network with parameters U o , V h and W h .",
"h t is an intermedi- Figure 1 : Hierarchical encoder-decoder model for sentence extraction with external attention.",
"s 1 , .",
".",
".",
", s 5 are sentences in the document and, e 1 , e 2 and e 3 represent external information.",
"For the extractive summarization task, e i s are external information such as title and image captions.",
"For the answers selection task, e i s are the query and word overlap features.",
"ate RNN state at time step t. The dynamic context vector h t is essentially the weighted sum of the external information (e 1 , e 2 , .",
".",
".",
", e p ).",
"Figure 1 summarizes our model.",
"Sentence Extraction Applications We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension.",
"Both these tasks require local and global contextual reasoning about a given document.",
"As such, they test the ability of our model to facilitate document modeling using external information.",
"Extractive Summarization An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n).",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i should be included in the summary) by assigning score p(y i |s i , D, E , θ) quantifying the relevance of s i to the summary.",
"We assemble a summary S by selecting m sentences with top p(y i = 1|s i , D, E , θ) scores.",
"We formulate external information E as the sequence of the title and the image captions associated with the document.",
"We use the convolutional sentence encoder to get their sentence-level representations.",
"Answer Selection Given a question q and a document D , the goal of the task is to select one candidate sentence s i ∈ D in which the answer exists.",
"In this setting, our sentence extractor sequentially predicts label y i ∈ {0, 1} (where 1 means that s i contains the answer) and assign score p(y i |s i , D, E , θ) quantifying s i 's relevance to the query.",
"We return as answer the sentence s i with the highest p(y i = 1|s i , D, E , θ) score.",
"We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation.",
"This simplifies Eq.",
"(1) and (2) as follow: p(y t |s t , D, q) = softmax(g(h t , q)) (3) g(h t , q) = U o (V h h t + W q q), where V h and W q are network parameters.",
"We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF, (Trischler et al., 2016) ), the inverse document frequency (IDF) and a modified version of the ISF score which we call local ISF.",
"Trischler et al.",
"(2016) have shown that a simple ISF baseline (i.e., a sentence with the highest ISF score as an answer) correlates well with the answers.",
"The ISF score α s i for the sentence s i is computed as α s i = w∈s i ∩q IDF(w), where IDF is the inverse document frequency score of word w, defined as: IDF(w) = log N Nw , where N is the total number of sentences in the training set and N w is the number of sentences in which w appears.",
"Note that, s i ∩ q refers to the set of words that appear both in s i and in q.",
"Local ISF is calculated in the same manner as the ISF score, only with setting the total number of sentences (N ) to the number of sentences in the article that is being analyzed.",
"More formally, this modifies Eq.",
"(3) as follows: p(y t |s t , D, q) = softmax(g(h t , q, α t , β t , γ t )), (4) where α t , β t and γ t are the ISF, IDF and local ISF scores (real values) of sentence s t respectively .",
"The function g is calculated as follows: g(h t , q, α t , β t , γ t ) =U o (V h h t + W q q + W isf (α t · 1)+ W idf (β t · 1) + W lisf (γ t · 1) , where W isf , W idf and W lisf are new parameters added to the network and 1 is a vector of 1s of size equal to the sentence embedding size.",
"In Figure 1 , these external feature vectors are represented as 6-dimensional gray vectors accompanied with dashed arrows.",
"Experiments and Results This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups.",
"In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.",
"Extractive Document Summarization Summarization Dataset We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) .",
"2 We used the standard splits of Hermann et al.",
"(2015) for training, validation, and testing (90,266/1,220/1,093 documents).",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 See et al., 2017; Tan and Wan, 2017) in assuming that the \"story highlights\" associated with each article are gold-standard abstractive summaries.",
"We trained our network on a named-entity-anonymized version of news articles.",
"However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.",
"To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy).",
"We followed Nallapati et al.",
"(2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary.",
"We used a modified script of Hermann et al.",
"(2015) to extract titles and image captions, and we associated them with the corresponding articles.",
"All articles get associated with their titles.",
"The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.",
"There are 40% CNN articles with at least one image caption.",
"All sentences, including titles and image captions, were padded with zeros to a sentence length of 100.",
"All input documents were padded with zeros to a maximum document length of 126.",
"For each document, we consider a maximum of 10 image captions.",
"We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions.",
"We refer the reader to the supplementary material for more implementation details to replicate our results.",
"Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary.",
"We refer to this baseline as LEAD in the rest of the paper.",
"We also compared our system against the sentence extraction system of Cheng and Lapata (2016) .",
"We refer to this system as POINTERNET, as the neural attention architecture in Cheng and Lapata (2016) resembles that of Pointer Networks.",
"It does not exploit any external information.",
"The architecture of POINTERNET is closely related to our model without external information.",
"Adding external information to POINTERNET is an interesting direction of research, but we do not pursue it here; it requires decoding with multiple types of attention, and this is not the focus of this paper.",
"We are unable to compare our results to the extractive system of Nallapati et al. (2017) because they report their results on the DailyMail dataset and their code is not available.",
"The abstractive systems of Chen et al. (2016) and Tan and Wan (2017) report their results on the CNN dataset; however, their results are not comparable to ours, as they report the full-length F1 variants of ROUGE to evaluate their abstractive summaries.",
"We report ROUGE recall scores, which are more appropriate for evaluating our extractive summaries.",
"Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003), a recall-oriented metric, to compare our model-generated summaries to manually written highlights.",
"Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to assess informativeness, and ROUGE-L (RL) to assess fluency.",
"In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4), which capture higher-order n-gram overlap, to assess informativeness and fluency simultaneously.",
"We used pyrouge, a Python package, to compute all our ROUGE scores with parameters \"-a -c 95 -m -n 4 -w 1.2\".",
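A minimal, hedged example of scoring with pyrouge using the parameters above; the directory layout, filename patterns and ROUGE data path are placeholders, not the authors' setup, and pyrouge requires a configured ROUGE-1.5.5 installation:

    from pyrouge import Rouge155

    r = Rouge155()
    r.system_dir = "outputs/system"            # model-generated summaries, one file per document
    r.model_dir = "outputs/reference"          # gold highlights
    r.system_filename_pattern = r"(\d+).txt"
    r.model_filename_pattern = "#ID#.txt"

    output = r.convert_and_evaluate(rouge_args="-e /path/to/ROUGE-1.5.5/data -a -c 95 -m -n 4 -w 1.2")
    scores = r.output_to_dict(output)
    print(scores["rouge_1_recall"], scores["rouge_2_recall"], scores["rouge_l_recall"])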
"We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries.",
"For full length summaries, our decision to select three sentences is guided by the fact that there are on average 3.11 sentences in the gold highlights of the training set.",
"We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set.",
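For illustration only (not the authors' code), the full-length (top-three) and fixed-length (75/275-byte) summaries described above could be assembled from per-sentence scores roughly as follows:

    def full_length_summary(sentences, scores, k=3):
        # Take the k highest-scoring sentences, kept in document order.
        top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
        return " ".join(sentences[i] for i in sorted(top))

    def fixed_length_summary(sentences, scores, num_bytes=75):
        # Concatenate sentences by decreasing score and truncate to the byte limit.
        ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
        text = " ".join(sentences[i] for i in ranked)
        return text.encode("utf-8")[:num_bytes].decode("utf-8", errors="ignore")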
"We experimented with two types of external information: title (TITLE) and image captions (CAPTION).",
"In addition, we experimented with the first sentence (FS) of the document as external information.",
"Note that the latter is not external information; it is a sentence in the document.",
"However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016) .",
"XNET with FS acts as a baseline for XNET with title and image captions.",
"We report the performance of several variants of XNET on the validation set in Table 1 .",
"We also compare them against the LEAD baseline and POINTERNET.",
"These two systems do not use any additional information.",
"Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.",
"When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information.",
"Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001) .",
"The performance with TITLE and CAPTION is better than that with FS.",
"We also tried possible combinations of TITLE, CAPTION and FS.",
"All XNET models are superior to the ones without any external information.",
"XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively).",
"It is better than the LEAD baseline by 3.7 points on average and better than POINTERNET by 1.8 points on average, indicating that external information is useful for identifying the gist of the document.",
"We use this model for testing purposes.",
"Our final results on the test set are shown in Table 2.",
"This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas XNET is better at selecting sentences from various document positions.",
"This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores.",
"We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL.",
"It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document.",
"XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores.",
"Human Evaluation We complement our automatic evaluation results with human evaluation.",
"We randomly selected 20 articles from the test set.",
"Annotators were presented with a news article and summaries from four different systems.",
"These include the LEAD baseline, POINTERNET, XNET and the human authored highlights.",
"We followed the guidelines in Cheng and Lapata (2016) , and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?)",
"and fluency (is the summary written in well-formed English?).",
"We did not allow any ties and we only sampled articles with nonidentical summaries.",
"We assigned this task to five annotators who were proficient English speakers.",
"Each annotator was presented with all 20 articles.",
"The order of summaries to rank was randomized per article.",
"An example of summaries our subjects ranked is provided in the supplementary material.",
"The results of our human evaluation study are shown in Table 3 .",
"As one might imagine, HUMAN gets ranked 1st most of the time (41%).",
"However, it is closely followed by XNET which ranked 1st 28% of the time.",
"In comparison, POINTER-NET and LEAD were mostly ranked at 3rd and 4th places.",
"We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests (p < 0.01).",
"It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN.",
"On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN.",
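The significance test just described could be reproduced along these lines (hedged sketch with SciPy and statsmodels; the rank data shown are invented placeholders, not the study's actual rankings):

    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical rank data (1 = best, 4 = worst); one entry per article/annotator pair.
    ranks = {
        "LEAD":       [4, 3, 4, 3, 4, 3],
        "POINTERNET": [3, 4, 3, 4, 3, 4],
        "XNET":       [2, 1, 2, 2, 1, 2],
        "HUMAN":      [1, 2, 1, 1, 2, 1],
    }

    # One-way ANOVA across the four systems.
    print(f_oneway(*ranks.values()))

    # Post-hoc pairwise Tukey HSD at p < 0.01.
    values = [v for vs in ranks.values() for v in vs]
    groups = [name for name, vs in ranks.items() for _ in vs]
    print(pairwise_tukeyhsd(values, groups, alpha=0.01))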
"The human evaluation results corroborate our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINTERNET in producing informative and fluent summaries.",
"Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA, SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), and MSMarco (Nguyen et al., 2016).",
"NewsQA was especially designed to present lexical and syntactic divergence between questions and answers.",
"It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al.",
"(2015) .",
"In a similar manner, SQuAD associates 100,000+ questions with a Wikipedia article's first paragraph, for 500+ previously chosen articles.",
"WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query.",
"A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query.",
"We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016) and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence.",
"In the case of MSMarco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence.",
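As a rough sketch of the labeling scheme just described (the character-offset span representation is an assumption), the conversion from gold answer spans to sentence labels could look like this:

    def label_sentences(sentence_spans, answer_spans):
        # sentence_spans: (start, end) character offsets for each candidate sentence.
        # answer_spans: (start, end) character offsets for each gold answer span.
        labels = []
        for s_start, s_end in sentence_spans:
            has_answer = any(s_start <= a_start and a_end <= s_end
                             for a_start, a_end in answer_spans)
            labels.append(1 if has_answer else 0)
        return labels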
"Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets.",
"For validation, we set apart 10% of each official training set.",
"Our dataset splits consist of 92,525, 5,165, and 5,124 samples for NewsQA; 79,032, 8,567, and 10,570 for SQuAD; 873 and 122 training and validation samples for WikiQA; and 79,704, 9,706, and 9,650 for MSMarco, for training, validation, and testing respectively.",
"Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines.",
"Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline.",
"We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) pairs in an isolated manner, as in previous work (Yin et al., 2016).",
"The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations.",
"The distribution over labels is given by p(y_t | q) = p(y_t | s_t, q) = softmax(g(s_t, q)), where g(s_t, q) = ReLU(W_sq · [s_t; q] + b_sq).",
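A minimal NumPy sketch of this scoring function, assuming s_t and q are the encoder outputs and that W_sq projects the concatenated embeddings onto the two output labels (a sketch under these assumptions, not the authors' implementation):

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def paircnn_distribution(s_t, q, W_sq, b_sq):
        # g(s_t, q) = ReLU(W_sq . [s_t; q] + b_sq); softmax over g gives p(y_t | s_t, q).
        g = relu(W_sq @ np.concatenate([s_t, q]) + b_sq)
        e = np.exp(g - g.max())
        return e / e.sum()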
"In addition, we also compare our model against AP-CNN (dos Santos et al., 2016), ABCNN (Yin et al., 2016), L.D.C (Wang and Jiang, 2017), KV-MemNN (Miller et al., 2016), and COMPAGGR, a state-of-the-art system by Wang et al. (2017).",
"We experiment with several variants of our model.",
"XNET is the vanilla version of our sentence extractor conditioned only on the query q as external information (Eq. (3)).",
"[Table 4: ACC, MAP and MRR on SQuAD, WikiQA, NewsQA and MSMarco for our models and the comparison systems listed above; (WGT) WRD CNT stands for the (weighted) word count baseline; see text for more details.]",
"XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eq. (4)).",
"We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET.",
"In our experiments, we set k = 5.",
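A small illustrative sketch of this two-stage selection (not the authors' implementation):

    def xnet_topk(sentences, isf_scores, xnet_probs, k=5):
        # Keep the k sentences with the highest ISF score, then return the one
        # XNET assigns the highest probability of being an answer.
        top_k = sorted(range(len(sentences)), key=lambda i: isf_scores[i], reverse=True)[:k]
        best = max(top_k, key=lambda i: xnet_probs[i])
        return sentences[best]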
"In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier.",
"It uses ISF and local ISF scores for NewsQA; IDF and ISF scores for SQuAD; sentence length, IDF and ISF scores for WikiQA; and word overlap and ISF scores for MSMarco.",
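A hedged scikit-learn sketch of such a logistic regression ensemble; the feature row follows the NewsQA configuration mentioned above, and the helper names are hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_lr_ensemble(features, labels):
        # features: one row per candidate sentence, e.g. [xnet_score, compaggr_score, isf, local_isf]
        # (the NewsQA feature set above); labels: 1 if the sentence contains an answer.
        clf = LogisticRegression()
        clf.fit(np.array(features), np.array(labels))
        return clf

    def select_answer(clf, candidate_sentences, candidate_features):
        # The candidate with the highest positive-class probability is returned as the answer.
        probs = clf.predict_proba(np.array(candidate_features))[:, 1]
        return candidate_sentences[int(np.argmax(probs))]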
"We refer the reader to the supplementary material for more implementation and optimization details to replicate our results.",
"Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC).",
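For reference, a minimal sketch of these three metrics over per-question candidate rankings (each question represented by its candidates' gold labels in the system's ranked order):

    import numpy as np

    def accuracy(ranked_labels):
        # 1 if the top-ranked candidate is a correct answer.
        return float(ranked_labels[0] == 1)

    def reciprocal_rank(ranked_labels):
        for rank, label in enumerate(ranked_labels, start=1):
            if label == 1:
                return 1.0 / rank
        return 0.0

    def average_precision(ranked_labels):
        hits, precisions = 0, []
        for rank, label in enumerate(ranked_labels, start=1):
            if label == 1:
                hits += 1
                precisions.append(hits / rank)
        return float(np.mean(precisions)) if precisions else 0.0

    def evaluate(questions):
        # questions: list of label lists, one per question.
        acc = np.mean([accuracy(q) for q in questions])
        map_ = np.mean([average_precision(q) for q in questions])
        mrr = np.mean([reciprocal_rank(q) for q in questions])
        return acc, map_, mrr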
"Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.",
"Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation.",
"Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET.",
"This means that just \"reading\" the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering.",
"Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR.",
"Our ensemble model LRXNET can ultimately surpass COMPAGGR on the majority of the datasets.",
"This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection.",
"Specifically, the combination of document reading and word-overlap features needs to be done in a soft manner, using a classification technique.",
"Using it as a hard constraint, with XNETTOPK, does not achieve the best result.",
"We believe the ISF score is often a better indicator of answer presence in the vicinity of a certain candidate rather than in the candidate itself.",
"As such, XNET+ is capable of using this feature in datasets with richer context.",
"It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern.",
"For the SQuAD dataset, the results are comparable (less than 1%).",
"However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%.",
"This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and to an entire article for NewsQA.",
"This further supports our hypothesis that a richer context, here expressed as document length, is needed to achieve better results; however, as the length of the context increases, the limitations of sequential models in learning from long, rich sequences become apparent.",
"Interestingly, our model lags behind COMPAGGR on the MSMarco dataset.",
"It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets.",
"As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly.",
"This can be observed by the fact that XNET and PAIRCNN obtain comparable results.",
"COMPAGGR performs better because comparing each candidate independently is a better strategy.",
"Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document.",
"We implement our approach through an attention mechanism of a neural network architecture for modeling documents.",
"Our experiments with extractive document summarization and answer selection tasks validate our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks.",
"Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information.",
"Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal.",
"For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Document Modeling For Sentence Extraction",
"Sentence Extraction Applications",
"Experiments and Results",
"Extractive Document Summarization",
"Answer Selection",
"Conclusion"
]
} | GEM-SciDuet-train-124#paper-1342#slide-12 | XNet variants | model SQuAD WikiQA NewsQA MSMarco
reporting accuracy, similar patterns for MRR and MAP
XNet: only considering q
XNet+: considers q, IDF, ISF, LocalISF
XNetTopk: choose top k sentences based on ISF and then XNet
LRXNet: ensemble <XNet, CompAggr (Wang et al., 2017), classifier considering word overlap scores> | model SQuAD WikiQA NewsQA MSMarco
reporting accuracy, similar patterns for MRR and MAP
XNet: only considering q
XNet+: considers q, IDF, ISF, LocalISF
XNetTopk: choose top k sentences based on ISF and then XNet
LRXNet: ensemble <XNet, CompAggr (Wang et al., 2017), classifier considering word overlap scores> | [] |