# Understanding Linearity Of Cross-Lingual Word Embedding Mappings

Xutan Peng† *x.peng@shef.ac.uk*

Chenghua Lin† *c.lin@shef.ac.uk*

Mark Stevenson† *mark.stevenson@shef.ac.uk*

Chen Li‡ *palchenli@tencent.com*

†*Department of Computer Science, The University of Sheffield*

‡*Applied Research Center, Tencent PCG*

Reviewed on OpenReview: *https://openreview.net/forum?id=8HuyXvbvqX*
## Abstract
The technique of Cross-Lingual Word Embedding (CLWE) plays a fundamental role in tackling Natural Language Processing challenges for low-resource languages. Its dominant approaches assume that the relationship between embeddings can be represented by a linear mapping, but the conditions under which this assumption holds have not been explored. This research gap has recently become critical, as evidence has emerged that relaxing mappings to be non-linear can lead to better performance in some cases. We, for the first time, present a theoretical analysis that identifies the preservation of analogies encoded in monolingual word embeddings as a *necessary and sufficient* condition for the ground-truth CLWE mapping between those embeddings to be linear. On a novel cross-lingual analogy dataset that covers five representative analogy categories for twelve distinct languages, we carry out experiments which provide direct empirical support for our theoretical claim. These results offer additional insight into the observations of other researchers and contribute inspiration for the development of more effective cross-lingual representation learning strategies.
## 1 Introduction

Cross-Lingual Word Embedding (CLWE) methods encode words from two or more languages in a shared high-dimensional space in which vectors representing lexical items with similar meanings (regardless of language) are closely located. Compared with alternative techniques, such as cross-lingual pre-trained language models, CLWE is orders of magnitude more efficient in terms of training corpora[^1] and computational power requirements[^2]. As a result, the topic has received significant attention as a promising means to support Natural Language Processing (NLP) for low-resource languages (including ancient languages) and has been used for a range of applications, e.g., Machine Translation (Herold et al., 2021), Sentiment Analysis (Sun et al., 2021), Question Answering (Zhou et al., 2021) and Text Summarisation (Peng et al., 2021b).

[^1]: For example, Kim et al. (2020) show that inadequate monolingual data size (fewer than one million *sentences*) is likely to lead to collapsed performance of XLM (Lample & Conneau, 2019) even for etymologically close language pairs. Meanwhile, CLWE can easily align word embeddings for languages such as Amharic and Tigrinya, for which only millions of tokens are available (Zhang et al., 2020).

[^2]: For example, XLM-R (Conneau et al., 2020) was trained on 500 Tesla V100 GPUs, whereas the training of VecMap (Artetxe et al., 2018) can be finished on a single Titan Xp GPU.
The most successful CLWE approach, CLWE alignment, learns mappings between independently trained monolingual word vectors with very little, or even no, cross-lingual supervision (Ruder et al., 2019). One of the key challenges of these algorithms is the design of mapping functions. Motivated by the observation that word embeddings for different languages tend to be similar in structure (Mikolov et al., 2013b), many researchers have assumed that the mappings between cross-lingual word vectors are linear (Faruqui & Dyer, 2014; Lample et al., 2018b; Li et al., 2021).
Although models based on this assumption have demonstrated strong performance, it has recently been questioned. Researchers have claimed that the structure of multilingual word embeddings may not always be similar (Søgaard et al., 2018; Dubossarsky et al., 2020; Vulić et al., 2020), which led to the emergence of approaches relaxing the mapping linearity (Glavaš & Vulić, 2020; Wang et al., 2021a) or using nonlinear functions (Mohiuddin et al., 2020; Ganesan et al., 2021). These new methods can sometimes outperform the traditional linear counterparts, causing a debate around the suitability, or otherwise, of linear mappings. However, to the best of our knowledge, the majority of previous CLWE work has focused on empirical findings, and there has been no in-depth analysis of the conditions for the linearity assumption.
This paper approaches the problem from a novel perspective by establishing a link between the linearity of CLWE mappings and the preservation of encoded monolingual analogies. Our work is motivated by the observation that word analogies can be solved via the composition of semantics based on vector arithmetic (Mikolov et al., 2013c) and such linguistic regularities might be transferable across languages. More specifically, we notice that if analogies encoded in the embeddings of one language also appear in the embeddings of another, the corresponding multilingual vectors tend to form similar shapes (see Fig. 1), suggesting the CLWE mapping between them should be approximately linear. In other words, we suspect that the preservation of analogy encoding indicates the linearity of CLWE mappings.
![1_image_0.png](1_image_0.png)

Figure 1: Wiki vectors (see § 4.3) of English (left) and French (right) analogy word pairs based on PCA (Wold et al., 1987). NB: We manually rotate the visualisation to highlight structural similarity.
Our hypothesis is verified both theoretically and empirically. We justify that the preservation of analogy encoding should be a *sufficient and necessary* condition for the linearity of CLWE mappings. To provide empirical validation, we first define indicators to quantify the linearity of the ground-truth CLWE mapping (SLMP) and its preservation of analogy encoding (SPAE). Next, we build a novel cross-lingual word analogy corpus containing five analogy categories (both semantic and syntactic) for twelve languages that form pairs with diverse etymological distances. We then benchmark SLMP and SPAE on three representative series of word embeddings. In all setups tested, we observe a significant correlation between SLMP and SPAE, which provides empirical support for our hypothesis. With this insight, we offer explanations of why the linearity assumption occasionally fails and, consequently, discuss how our research can benefit the development of more effective CLWE algorithms. We also recommend the use of SPAE to assess mapping linearity in CLWE applications. We release our data and code at https://github.com/Pzoom522/xANLG.
This paper's contributions are summarised as:
- Introduces the previously unnoticed relationship between the linearity of CLWE mappings and the preservation of encoded word analogies.
- Provides a theoretical analysis of this relationship.

- Describes the construction of a novel cross-lingual analogy test set with five categories of word pairs aligned across twelve diverse languages.
- Provides empirical evidence of our claim and introduces SPAE to estimate the analogy encoding preservation (and therefore the mapping linearity). We additionally demonstrate that SPAE can be used as an indicator of the relationship between monolingual word embeddings, independently of trained CLWEs.
- Discusses implications of these results, regarding the interpretation of previous results as well as the future development of cross-lingual representations.
## 2 Related Work

**Linearity of CLWE Mapping.** Mikolov et al. (2013b) discovered that the vectors of word translations exhibit similar structures across different languages. Researchers made use of this by assuming that mappings between multilingual embeddings could be modelled using simple linear transformations. This framework turned out to be effective in numerous studies which demonstrated that linear mappings are able to produce accurate CLWEs with weak or even no supervision (Artetxe et al., 2017; Lample et al., 2018b; Artetxe et al., 2018; Wang et al., 2020; Li et al., 2021).
One way in which this is achieved is through the application of a normalisation technique called "mean centring", which (for each language) subtracts the average monolingual word vector from all word embeddings, so that this mean vector becomes the origin of the vector space (Xing et al., 2015; Artetxe et al., 2016; Ruder et al., 2019). This step has the effect of simplifying the mapping from being *affine* (i.e., equivalent to a shifting operation plus a linear mapping) to *linear* by removing the shifting operation.
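As a concrete illustration of this normalisation step (a minimal NumPy sketch under our own naming, not the code of any particular CLWE toolkit):

```python
import numpy as np

def mean_centre(embeddings: np.ndarray) -> np.ndarray:
    """Subtract the average word vector so that the mean becomes the origin.

    `embeddings` is an (n_words, dim) matrix of monolingual word vectors.
    After centring, an affine cross-lingual mapping Mx + b reduces to the
    purely linear form Mx, because the shift b is absorbed by the new origin.
    """
    return embeddings - embeddings.mean(axis=0, keepdims=True)
```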
However, recent work has cast doubt on this linearity assumption, leading researchers to experiment with the use of non-linear mappings. Nakashole & Flauger (2018) and Wang et al. (2021a) pointed out that structural similarities may only hold across particular regions of the embedding spaces rather than over their entirety.
Søgaard et al. (2018) examined word vectors trained using different corpora, models and hyper-parameters, and concluded that configuration dissimilarity between the monolingual embeddings breaks the assumption that the mapping between them is linear. Patra et al. (2019) investigated various language pairs and discovered that a higher etymological distance is associated with degraded linearity of CLWE mappings. Vulić et al. (2020) additionally argued that factors such as limited monolingual resources may also weaken the linearity assumption.
These findings motivated work on designing non-linear mapping functions in an effort to improve CLWE performance. For example, Nakashole (2018) and Wang et al. (2021a) relaxed the linearity assumption by combining multiple linear CLWE mappings; Patra et al. (2019) developed a semi-supervised model that loosened the linearity restriction; Lubin et al. (2019) attempted to reduce the dissimilarity between multilingual embedding manifolds by refining learnt dictionaries; Glavaš & Vulić (2020) first trained a globally optimal linear mapping, then adjusted vector positions to achieve better accuracy; Mohiuddin et al. (2020) used two independently pre-trained auto-encoders to introduce non-linearity to CLWE mappings; Ganesan et al. (2021) drew inspiration from the back-translation paradigm, framing CLWE training as explicitly solving a non-linear and bijective transformation between multilingual word embeddings. Despite these non-linear mappings outperforming their linear counterparts in many setups, in some settings the linear mappings still seem more successful, e.g., the alignment between Portuguese and English word embeddings in Ganesan et al. (2021). Moreover, training non-linear mappings is typically more complex and thus requires more computational resources. Despite the significant recent attention to this problem from the research community, it is still unclear under what conditions the linearity of CLWE mappings holds. This paper makes the first attempt to close this research gap by providing both theoretical and empirical contributions.
**Analogy Encoding.** Analogy is a fundamental concept within cognitive science (Gentner, 1983) that has received significant focus from the NLP community, since the observation that it can be represented using word embeddings and vector arithmetic (Mikolov et al., 2013c). A popular example based on the analogy "*king is to man as queen is to woman*" shows that the vectors representing the four terms ($\mathbf{x}_{king}$, $\mathbf{x}_{man}$, $\mathbf{x}_{queen}$ and $\mathbf{x}_{woman}$) exhibit the following relation:

$$\mathbf{x}_{king} - \mathbf{x}_{man} \approx \mathbf{x}_{queen} - \mathbf{x}_{woman}. \tag{1}$$
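Eq. (1) is what makes analogy completion with plain vector arithmetic possible. As a minimal sketch (using the classic 3CosAdd heuristic rather than the LRCos method adopted later in § 4.1.2; the function name is our own), the missing term of "*king* is to *man* as *queen* is to ?" can be retrieved as the word whose vector is closest to $\mathbf{x}_{man} - \mathbf{x}_{king} + \mathbf{x}_{queen}$:

```python
import numpy as np

def analogy_query(emb: dict, a: str, b: str, c: str) -> str:
    """3CosAdd: return the word d maximising cos(x_d, x_b - x_a + x_c).

    For "king is to man as queen is to ?", call analogy_query(emb, "king", "man", "queen").
    The query words themselves are excluded from the candidate set.
    """
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        score = float(vec @ target / np.linalg.norm(vec))
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```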
Since this discovery, the task of analogy completion has commonly been employed to evaluate the quality of pre-trained word embeddings (Mikolov et al., 2013c; Pennington et al., 2014; Levy & Goldberg, 2014a). This line of research has directly benefited downstream applications (e.g., representation bias removal (Prade & Richard, 2021)) and other relevant domains (e.g., automatic knowledge graph construction (Wang et al., 2021b)). Theoretical analysis has demonstrated a link between embeddings' analogy encoding and the Pointwise Mutual Information of the training corpus (Arora et al., 2016; Gittens et al., 2017; Allen & Hospedales, 2019; Ethayarajh et al., 2019; Fournier & Dunbar, 2021). Nonetheless, as far as we are aware, the connection between the preservation of analogy encoding and the linearity of CLWE mappings has not been previously investigated.
## 3 Theoretical Basis

We denote a ground-truth CLWE mapping as $\mathcal{M}: \mathbf{X} \to \mathbf{Y}$, where $\mathbf{X}$ and $\mathbf{Y}$ are monolingual word embeddings independently trained for languages $L_{\mathbf{X}}$ and $L_{\mathbf{Y}}$, respectively.

**Proposition.** Encoded analogies are preserved during the CLWE mapping $\mathcal{M} \iff \mathcal{M}$ is affine.

**Remarks.** Following Eq. (1), the preservation of analogy encoding under a mapping can be formalised as

$$\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}=\mathbf{x}_{\gamma}-\mathbf{x}_{\theta}\implies\mathcal{M}(\mathbf{x}_{\alpha})-\mathcal{M}(\mathbf{x}_{\beta})=\mathcal{M}(\mathbf{x}_{\gamma})-\mathcal{M}(\mathbf{x}_{\theta}),\tag{2}$$

where $\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}, \mathbf{x}_{\gamma}, \mathbf{x}_{\theta} \in \mathbf{X}$.

If $\mathcal{M}$ is affine, for $d$-dimensional monolingual embeddings $\mathbf{X}$ we have

$$\mathcal{M}(\mathbf{x}) := M\mathbf{x}+\mathbf{b},\tag{3}$$

where $\mathbf{x}\in \mathbf{X}$, $M\in\mathbb{R}^{d\times d}$, and $\mathbf{b}\in\mathbb{R}^{d\times 1}$.

**Proof: Eq. (2) $\implies$ Eq. (3).** To begin with, by adopting the mean centring operation in § 2, we shift the coordinates of the space of $\mathbf{X}$, ensuring

$$\mathcal{M}(\vec{0}) = \vec{0}.\tag{4}$$

This step greatly simplifies the derivations afterwards, because from now on we just need to demonstrate that $\mathcal{M}$ is a *linear mapping*, i.e., it can be written as $M\mathbf{x}$. By definition, this is equivalent to showing that $\mathcal{M}$ preserves both the operations of addition (a.k.a. additivity) and scalar multiplication (a.k.a. homogeneity).

Additivity can be proved by observing that $(\mathbf{x}_{i} + \mathbf{x}_{j}) - \mathbf{x}_{j} = \mathbf{x}_{i} - \vec{0}$ and therefore,

$$(\mathbf{x}_{i}+\mathbf{x}_{j})-\mathbf{x}_{j}=\mathbf{x}_{i}-\vec{0}\ \xrightarrow{\text{Eq. (2)}}\ \mathcal{M}(\mathbf{x}_{i}+\mathbf{x}_{j})-\mathcal{M}(\mathbf{x}_{j})=\mathcal{M}(\mathbf{x}_{i})-\mathcal{M}(\vec{0})\ \xrightarrow{\text{Eq. (4)}}\ \mathcal{M}(\mathbf{x}_{i}+\mathbf{x}_{j})=\mathcal{M}(\mathbf{x}_{i})+\mathcal{M}(\mathbf{x}_{j}).\tag{5}$$

Homogeneity can be proved in four steps.

- **Step 1**: Observe that $\vec{0} - \mathbf{x}_{i} = -\mathbf{x}_{i} - \vec{0}$; similar to Eq. (5) we can show that

$$\vec{0}-\mathbf{x}_{i}=-\mathbf{x}_{i}-\vec{0}\ \xrightarrow{\text{Eq. (2)}}\ \mathcal{M}(\vec{0})-\mathcal{M}(\mathbf{x}_{i})=\mathcal{M}(-\mathbf{x}_{i})-\mathcal{M}(\vec{0})\ \xrightarrow[\times(-1)]{\text{Eq. (4)}}\ \mathcal{M}(\mathbf{x}_{i})=-\mathcal{M}(-\mathbf{x}_{i}).\tag{6}$$

- **Step 2**: Using *mathematical induction*, for arbitrary $\mathbf{x}_{i}$, we show that

$$\forall m \in \mathbb{N}^{+},\ \mathcal{M}(m\mathbf{x}_{i}) = m\mathcal{M}(\mathbf{x}_{i})\tag{7}$$

holds, where $\mathbb{N}^{+}$ is the set of positive natural numbers, as follows.

  Base Case: Eq. (7) trivially holds when $m = 1$.

  Inductive Step: Assume the inductive hypothesis for $m = k$ ($k \in \mathbb{N}^{+}$), i.e.,

$$\mathcal{M}(k\mathbf{x}_{i}) = k\mathcal{M}(\mathbf{x}_{i}).\tag{8}$$

  Then, as required, when $m = k + 1$,

$$\mathcal{M}\big((k+1)\mathbf{x}_{i}\big)\ \xrightarrow{\text{Eq. (5)}}\ \mathcal{M}(k\mathbf{x}_{i})+\mathcal{M}(\mathbf{x}_{i})\ \xrightarrow{\text{Eq. (8)}}\ k\mathcal{M}(\mathbf{x}_{i})+\mathcal{M}(\mathbf{x}_{i})=(k+1)\mathcal{M}(\mathbf{x}_{i}).$$

- **Step 3**: We further justify that

$$\forall n \in\mathbb{N}^{+},\ \mathcal{M}\Big(\frac{\mathbf{x}_{i}}{n}\Big) = \frac{\mathcal{M}(\mathbf{x}_{i})}{n},\tag{9}$$

which, due to Eq. (4), trivially holds when $n = 1$; as for $n > 1$,

$$\mathcal{M}\Big(\frac{\mathbf{x}_{i}}{n}\Big)=\mathcal{M}\Big(\mathbf{x}_{i}+\big(-\tfrac{n-1}{n}\mathbf{x}_{i}\big)\Big)\ \xrightarrow{\text{Eq. (5)}}\ \mathcal{M}(\mathbf{x}_{i})+\mathcal{M}\big(-\tfrac{n-1}{n}\mathbf{x}_{i}\big)\ \xrightarrow{\text{Eq. (6)}}\ \mathcal{M}(\mathbf{x}_{i})-\mathcal{M}\big(\tfrac{n-1}{n}\mathbf{x}_{i}\big)\ \xrightarrow{\text{Eq. (7)}}\ \mathcal{M}(\mathbf{x}_{i})-(n-1)\mathcal{M}\Big(\frac{\mathbf{x}_{i}}{n}\Big),$$

which directly yields $\mathcal{M}\big(\frac{\mathbf{x}_{i}}{n}\big) = \frac{\mathcal{M}(\mathbf{x}_{i})}{n}$, i.e., Eq. (9).

- **Step 4**: Considering the set of rational numbers $\mathbb{Q} = \{0\} \cup \{\pm\frac{m}{n}\,|\,\forall m, n \in \mathbb{N}^{+}\}$, Eqs. (4), (6), (7) and (9) jointly justify the homogeneity of $\mathcal{M}$ for $\mathbb{Q}$. Because $\mathbb{Q} \subset \mathbb{R}$ is a *dense set*, homogeneity of $\mathcal{M}$ also holds over $\mathbb{R}$; see Kleiber & Pervin (1969).

Finally, combined with the additivity that has already been justified above, linearity of the CLWE mapping $\mathcal{M}$ is proved, i.e., Eq. (2) $\implies$ Eq. (3).

**Proof: Eq. (3) $\implies$ Eq. (2).** Justifying this direction is quite straightforward:

$$\begin{split}\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}=\mathbf{x}_{\gamma}-\mathbf{x}_{\theta}&\implies M\mathbf{x}_{\alpha}-M\mathbf{x}_{\beta}=M\mathbf{x}_{\gamma}-M\mathbf{x}_{\theta}\\ &\implies M\mathbf{x}_{\alpha}+\mathbf{b}-(M\mathbf{x}_{\beta}+\mathbf{b})=M\mathbf{x}_{\gamma}+\mathbf{b}-(M\mathbf{x}_{\theta}+\mathbf{b})\\ &\implies\mathcal{M}(\mathbf{x}_{\alpha})-\mathcal{M}(\mathbf{x}_{\beta})=\mathcal{M}(\mathbf{x}_{\gamma})-\mathcal{M}(\mathbf{x}_{\theta}). \qquad\square\end{split}$$

Summarising the proofs for both the forward and reverse directions, we conclude that the proposition holds.

Please note, the high-level assumption of our derivations is that word embedding spaces can be treated as continuous vector spaces, an assumption commonly adopted in previous work, e.g., Levy & Goldberg (2014b), Hashimoto et al. (2016), Zhang et al. (2018), and Ravfogel et al. (2020). Nevertheless, we argue that the inherent discreteness of word embeddings should not be ignored. The following sections complement this theoretical insight via experiments which confirm the claim holds empirically.
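As a quick numerical sanity check of the proposition (our own illustration, not part of the paper's experiments): an affine map reproduces every exact offset equality of the form in Eq. (2), whereas a generic non-linear map does not.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.normal(size=(d, d))
b = rng.normal(size=d)

# Four vectors that satisfy x_a - x_b = x_c - x_t exactly (an "analogy" in the sense of Eq. (2)).
x_a, x_b, x_c = rng.normal(size=(3, d))
x_t = x_c - (x_a - x_b)

affine = lambda x: M @ x + b            # the form of Eq. (3)
nonlin = lambda x: np.tanh(M @ x + b)   # an arbitrary non-linear map, for contrast

# Affine map: the offset equality is preserved (Eq. (2) holds).
print(np.allclose(affine(x_a) - affine(x_b), affine(x_c) - affine(x_t)))   # True
# Non-linear map: the equality is generally broken.
print(np.allclose(nonlin(x_a) - nonlin(x_b), nonlin(x_c) - nonlin(x_t)))   # False (almost surely)
```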
## 4 Experiment

Our experimental protocol assesses the linearity of the mapping between each pair of pre-trained monolingual word embeddings. We also quantify the extent to which this mapping preserves encoded analogies, i.e., satisfies the condition of Eq. (2). We then analyse the correlation between these two indicators. A strong correlation provides evidence to support our theory, and *vice versa*. The indicators used are described in § 4.1. Unfortunately, there are no suitable publicly available corpora for our proposed experiments, so we develop a novel word-level analogy test set that is fully parallel across languages, namely xANLG (see § 4.2). The pre-trained embeddings used for the tests are described in § 4.3.
## 4.1 Indicators

## 4.1.1 Linearity of CLWE Mapping

Direct measurement of the linearity of a ground-truth CLWE mapping is challenging. One relevant approach is to benchmark the similarity between multilingual word embeddings, where the mainstream and state-of-the-art indicators are the so-called spectral-based algorithms (Søgaard et al., 2018; Dubossarsky et al., 2020). However, such methods assume the number of tested vectors to be much larger than the number of dimensions, which does not apply in our scenario (see § 4.2). Therefore, we choose to evaluate linearity via the goodness-of-fit of the optimal linear CLWE mapping, which is measured as

$$\mathcal{S}_{\rm LMP} := -\,||M^{\star}X-Y||_{F}\,/\,r \quad \text{with} \quad M^{\star}=\arg\min_{M}||MX-Y||_{F},$$
where $||\cdot||_{F}$ and $r$ denote the Frobenius norm and the number of $X$'s rows, respectively. To obtain matrices $X$ and $Y$, from $\mathbf{X}$ and $\mathbf{Y}$ respectively, we first retrieve the vectors corresponding to the lexicon of a ground-truth $L_{\mathbf{X}}$-$L_{\mathbf{Y}}$ dictionary and concatenate them into two matrices. More specifically, if two vectors (represented as rows) share the same index in the two matrices (one for each language), their corresponding words form a translation pair, i.e., the rows of these matrices are aligned. "Mean centring" is applied to satisfy Eq. (4). For fair comparisons across different mapping pairs, in each of $X$ and $Y$, rows are standardised by scaling the mean Euclidean norm to 1. Generic Procrustes Analysis (not necessarily orthogonal) (Bookstein, 1992) is applied to find $M^{\star}$.

Large values of SLMP (i.e., small fitting residuals, since SLMP is non-positive) mean that the optimal linear mapping is an accurate model of the true relationship between the embeddings, and *vice versa*. SLMP therefore indicates the degree to which CLWE mappings are linear.
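A minimal NumPy sketch of how such a score could be computed (our own illustration, not the authors' released code). It assumes the rows of X and Y are already aligned, mean-centred and norm-scaled as described above; an ordinary least-squares fit gives the same unconstrained optimum as a generic (non-orthogonal) Procrustes analysis:

```python
import numpy as np

def s_lmp(X: np.ndarray, Y: np.ndarray) -> float:
    """Goodness-of-fit of the best unconstrained linear map from X to Y.

    X, Y: (r, d) matrices whose rows are the vectors of aligned translation pairs,
    already mean-centred and scaled so that the mean row norm is 1.
    Returns S_LMP = -||M* X - Y||_F / r, with M* the least-squares minimiser.
    """
    # With row vectors the model is X @ W ≈ Y (W plays the role of M transposed).
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    residual = np.linalg.norm(X @ W - Y, ord="fro")
    return -residual / X.shape[0]
```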
## 4.1.2 Preservation Of Analogy Encoding

To assess how well analogies are preserved across embeddings, we start by probing how analogies are encoded in the monolingual word embeddings. We use the set-based LRCos, the state-of-the-art analogy mining tool for static word embeddings (Drozd et al., 2016).[^3] It provides a score in the range of 0 to 1, indicating the correctness of analogy completion in a single language. For the extension to a cross-lingual setup, we further compute the geometric mean:

$$\mathcal{S}_{\mathrm{PAE}} := \sqrt{\mathrm{LRCos}(\mathbf{X})\times\mathrm{LRCos}(\mathbf{Y})},$$
where LRCos(·) is the accuracy of analogy completion provided by LRCos for the given embedding. To simplify our discussion and analysis from now onward, when performing CLWE mappings, by default we treat the monolingual embedding that best encodes analogies as the source, i.e., we restrict LRCos(X) ≥ LRCos(Y). SPAE = 1 indicates all analogies are well encoded in both embeddings, and are preserved by the ground-truth mapping between them. On the other hand, lower SPAE values indicate deviation from the condition of Eq. (2).
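In code, the indicator itself is a one-line geometric mean (a sketch under our own naming; the LRCos scores come from the analogy evaluation above):

```python
import math

def s_pae(lrcos_x: float, lrcos_y: float) -> float:
    """Geometric mean of the two monolingual LRCos accuracies (each in [0, 1])."""
    # Convention from the text: the embedding with the higher LRCos score is the
    # source side, i.e. lrcos_x >= lrcos_y; the geometric mean itself is symmetric.
    return math.sqrt(lrcos_x * lrcos_y)
```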
## 4.1.3 Validity of SPAE

As an aside, we explore the properties of the SPAE indicator to demonstrate its robustness for the interested reader. The score produced by LRCos is relative to a pre-specified set of *known* analogies. In theory, a low LRCos(X) score may not reliably indicate that X does not encode analogies well, since there may be other word pairings within that set that produce higher scores. This naturally raises a question: *is SPAE really valid as an indicator of analogy encoding preservation?* In other words, it is necessary to investigate whether there exists an *unknown* analogy word set encoded by the tested embeddings to an equal or higher degree. If there is, then SPAE may not reflect the preservation of analogy encoding completely, as unmatched analogy test sets may lead to low LRCos scores even for monolingual embeddings that encode analogies well. We demonstrate that the problem can be considered as an optimal transportation task and that SPAE is guaranteed to be a reliable indicator.
As analysed by Ethayarajh et al. (2019), the degree to which word pairs are encoded as analogies in word embeddings is equivalent to the likelihood that the end points of any two corresponding vector pairs form a high-dimensional coplanar parallelogram. More formally, this task is to identify

$$\mathbf{P}^{\star}=\arg\min_{\mathbf{P}}\sum_{\mathbf{x}\in\mathbf{X}} c\big(\mathcal{T}^{\mathbf{P}}(\mathbf{x})\big), \tag{10}$$
where $\mathbf{P}$ is one possible pairing of vectors in $\mathbf{X}$ and $c(\cdot)$ is the cost of a given transportation scheme. $\mathcal{T}^{\mathbf{P}}(\cdot)$ denotes the corresponding cost-optimal process of moving vectors to satisfy

$$\forall \{(\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}), (\mathbf{x}_{\gamma}, \mathbf{x}_{\theta})\} \subseteq \mathbf{P},\quad \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha}) - \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta}) = \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\gamma}) - \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\theta}), \tag{11}$$

i.e., the end points of $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha})$, $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta})$, $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\gamma})$ and $\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\theta})$ form a parallelogram.

![6_image_0.png](6_image_0.png)

Figure 2: An example of solving $\mathcal{T}^{\mathbf{P}}(\cdot)$ in Eq. (11), with $\mathbf{P} = \{(\mathbf{x}_1, \mathbf{x}_2), (\mathbf{x}_3, \mathbf{x}_4), (\mathbf{x}_5, \mathbf{x}_6), (\mathbf{x}_7, \mathbf{x}_8)\}$. In the figure we adjust the positions of $\mathbf{x}_1$, $\mathbf{x}_3$, $\mathbf{x}_5$ and $\mathbf{x}_7$ in the last step, but it is worth noting that there also exist other feasible $\mathcal{T}^{\mathbf{P}}(\cdot)$ given $\mathbf{p}^{\star}$, e.g., to tune $\mathbf{x}_2$, $\mathbf{x}_4$, $\mathbf{x}_6$ and $\mathbf{x}_8$ instead.

[^3]: We have tried alternatives including 3CosAdd (Mikolov et al., 2013a), PairDistance (Levy & Goldberg, 2014a) and 3CosMul (Levy et al., 2015), verifying that they are less accurate than LRCos in most cases. Still, in the experiments they all exhibit similar trends as shown in Tab. 2.
Therefore, in each language and analogy category of xANLG, we first randomly sample vector pairings, leading to $10^5$ different $\mathbf{P}$. Next, for each of them, we need to obtain the $\mathcal{T}^{\mathbf{P}}(\cdot)$ that minimises $\sum_{\mathbf{x}\in\mathbf{X}} c\big(\mathcal{T}^{\mathbf{P}}(\mathbf{x})\big)$ in Eq. (10). Our algorithm is explained using the example in Fig. 2, where the cardinality of $\mathbf{X}$ and $\mathbf{P}$ is 8 and 4, respectively.
- **Step 1**: Link the end points of the vectors within each word pair, hence our target is to adjust these end points so that all connecting lines not only have equal length but also remain parallel.

- **Step 2**: For each vector pair $(\mathbf{x}_{\alpha}, \mathbf{x}_{\beta}) \in \mathbf{P}$, vectorise its connecting line into an offset vector as $\mathbf{v}_{\alpha-\beta} = \mathbf{x}_{\alpha} - \mathbf{x}_{\beta}$.

- **Step 3**: As the start points of all such offset vectors are aggregated at $\vec{0}$, seek a vector $\mathbf{p}^{\star}$ that minimises the total transportation cost between the end point of $\mathbf{p}^{\star}$ and those of all offset vectors (again, note they share a start point at $\vec{0}$).

- **Step 4**: Perform the transportation so that all offset vectors become $\mathbf{p}^{\star}$, i.e.,

$$\forall(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})\in\mathbf{P},\ \mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\alpha})-\mathcal{T}^{\mathbf{P}}(\mathbf{x}_{\beta})=\mathbf{p}^{\star}.$$
In this way, the tuned vector pairs can always form perfect parallelograms. Obviously, as $\mathbf{p}^{\star}$ is at the cost-optimal position (see Step 3), this vector-adjustment scheme is also cost-optimal. Solving for $\mathbf{p}^{\star}$ in high dimensions is non-trivial in the real world and is a special case of the NP-hard Facility Location Problem (a.k.a. the P-Median Problem) (Kariv & Hakimi, 1979). We therefore use the scipy.optimize.fmin implementation of the Nelder-Mead simplex algorithm (Nelder & Mead, 1965) to provide a good-enough solution. To reach convergence, with the mean offset vector as the initial guess, we set both the absolute errors in parameter and function value between iterations to 1e-4. We experimented with implementing $c(\cdot)$ using mean Euclidean, Taxicab and Cosine distances respectively. For all analogy categories in all languages, $\mathbf{P}^{\star}$ coincides perfectly with the pre-defined pairing of xANLG. This analysis provides evidence that the situation where *an unknown kind of analogy is better encoded than the ones used* does not occur in practice. SPAE is thus trustworthy.
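The search for $\mathbf{p}^{\star}$ can be sketched as follows (our own reconstruction of the procedure described above; the exact cost definitions and settings are assumptions):

```python
import numpy as np
from scipy.optimize import fmin

def optimal_offset(offsets: np.ndarray, metric: str = "euclidean") -> np.ndarray:
    """Find p* minimising the total cost of moving every offset vector onto p*.

    `offsets` is an (n_pairs, d) matrix of v_{alpha-beta} = x_alpha - x_beta.
    """
    def total_cost(p: np.ndarray) -> float:
        if metric == "euclidean":
            return float(np.linalg.norm(offsets - p, axis=1).sum())
        if metric == "taxicab":
            return float(np.abs(offsets - p).sum())
        # cosine distance
        sims = offsets @ p / (np.linalg.norm(offsets, axis=1) * np.linalg.norm(p) + 1e-12)
        return float((1.0 - sims).sum())

    # Nelder-Mead simplex search with the mean offset vector as the initial guess.
    return fmin(total_cost, offsets.mean(axis=0), xtol=1e-4, ftol=1e-4, disp=False)
```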
## 4.2 Datasets
Calculating the correlation between SLMP and SPAE requires a cross-lingual word analogy dataset. This resource would allow us to simultaneously (1) construct two aligned matrices X and Y to check the linearity of CLWE mappings, and (2) obtain the monolingual LRCos scores of both X and Y. Three relevant resources were identified, although none of them is suitable for our study.
**xANLGG** (languages: de, en, es, fr, hi, pl)

| Category | #  | Example (en)        |
|----------|----|---------------------|
| CAP†     | 31 | Budapest : Hungary  |
| GNDR†    | 30 | son : daughter      |
| NATL†    | 34 | Peru : Peruvian     |
| G-PL‡    | 31 | child : children    |

**xANLGM** (languages: en, et, fi, hr, lv, ru, sl)

| Category | #  | Example (en)        |
|----------|----|---------------------|
| ANIM†    | 32 | eagle : bird        |
| G-PL‡    | 31 | machine : machines  |

Table 1: Summary of and examples from the xANLG corpus. # denotes the number of cross-lingual analogy word pairs in each language. †Semantic: animal-species|ANIM, capital-world|CAP, male-female|GNDR, nation-nationality|NATL. ‡Syntactic: grammar-plural|G-PL.
- Brychcín et al. (2019) described a cross-lingual analogy dataset consisting of word pairs from six closely related European languages, but it has never been made publicly available.
- Ulčar et al. (2020) open-sourced the MCIWAD dataset for nine languages, but the analogy words in different languages are not parallel.[^4]
- Garneau et al. (2021) produced the cross-lingual WiQueen dataset. Unfortunately, a large part of its entries are proper nouns or multi-word terms rather than single words, leading to low coverage of the embeddings' vocabularies.
Consequently, we develop xANLG, which we believe to be the first (publicly available) cross-lingual word analogy corpus. For consistency with previous work, xANLG is bootstrapped using established monolingual analogies and cross-lingual dictionaries. xANLG is constructed by starting with a *bilingual* analogy dataset, say, that for LX and LY. Within each analogy category, we first translate word pairs of the LX analogy corpus into LY, using an available LX-LY dictionary. Next, we check if any translation coincides with its original word pair in LY. If it does, such a word pair (in both LX and LY) will be added into the bilingual dataset. This process is repeated for multiple languages to form a cross-lingual corpus.
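Schematically, the filtering step for one language pair and one analogy category can be sketched as follows (our own illustration with hypothetical names; the real pipeline additionally relies on the dictionary resources and manual checks described below):

```python
def bilingual_analogy_pairs(src_pairs, tgt_pairs, dictionary):
    """Keep a source-language analogy word pair only if its dictionary translation
    already appears as a pair in the target-language analogy set.

    src_pairs / tgt_pairs: iterables of (word_a, word_b) tuples for one analogy category.
    dictionary: maps each source word to a set of admissible target translations.
    Returns a list of ((src_a, src_b), (tgt_a, tgt_b)) aligned pairs.
    """
    tgt_set = set(tgt_pairs)
    aligned = []
    for a, b in src_pairs:
        matches = [
            (ta, tb)
            for ta in dictionary.get(a, ())
            for tb in dictionary.get(b, ())
            if (ta, tb) in tgt_set
        ]
        if matches:  # the translation coincides with an existing target-language pair
            aligned.append(((a, b), matches[0]))
    return aligned
```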
We use the popular MUSE dictionary (Lample et al., 2018a) which contains a wide range of language pairs.
Two existing collections of analogies are utilised:
- **Google Analogy Test Set (GATS)** (Mikolov et al., 2013c), the *de facto* standard benchmark of embedding-based analogy solving. We adopt its extended English version, the Bigger Analogy Test Set (BATS) (Gladkova et al., 2016), supplemented with several datasets in other languages inspired by the original GATS: French, Hindi and Polish (Grave et al., 2018), German (Köper et al., 2015) and Spanish (Cardellino, 2019).
- The aforementioned Multilingual Culture-Independent Word Analogy Datasets (MCIWAD) (Ulčar et al., 2020).
Due to the differing characteristics of these datasets (e.g., the composition of analogy categories), they are used to produce two separate corpora: xANLGG and xANLGM. Only categories containing at least 30 word pairs aligned across all languages in the dataset were included. For comparison, 60% of the semantic analogy categories in the commonly used GATS dataset contain fewer than 30 word pairs. The rationale for selecting this value was that it allows a reasonable number of analogy completion questions to be generated.[^5] Information in xANLGG and xANLGM for the capital-country category of Hindi was supplemented with manual translations by native speakers. In addition, each analogy included in the dataset was checked by at least one fluent speaker of the relevant language to ensure that it is valid.

[^4]: Personal communication with the authors.

[^5]: 30 word pairs can be used to generate as many as 3,480 unique analogy completion questions such as "king:man :: *queen*:?" (see Appendix A).
The xANLG dataset contains five distinct analogy categories, including both syntactic (morphological) and semantic analogies, and twelve languages from a diverse range of families (see Tab. 1). From Indo-European languages, one belongs to the Indo-Aryan branch (Hindi|hi), one to the Baltic branch (Latvian|lv), two to the Germanic branch (English|en, German|de), two to the Romance branch (French|fr, Spanish|es) and four to the Slavonic branch (Croatian|hr, Polish|pl, Russian|ru, Slovene|sl). Two non-Indo-European languages, Estonian|et and Finnish|fi, both from the Finnic branch of the Uralic family, are also included. In total, they form 15 and 21 language pairs for xANLGG and xANLGM, respectively. These pairs span multiple etymological combinations, i.e., intra-language-branch (e.g., es-fr), inter-language-branch (e.g., de-ru) and inter-language-family (e.g., hi-et).
## 4.3 Word Embeddings
To cover the language pairs used in xANLG, we make use of static word embeddings pre-trained on the twelve languages used in the resource. These embeddings consist of three representative open-source series that employ different training corpora, are based on different embedding algorithms, and have different vector dimensions.
- **Wiki**[^6]: 300-dimensional, trained on Wikipedia using the Skip-Gram version of FastText (refer to Bojanowski et al. (2017) for details).

- **Crawl**[^7]: 300-dimensional, trained on CommonCrawl plus Wikipedia using FastText-CBOW.

- **CoNLL**[^8]: 100-dimensional, trained on the CoNLL corpus (without lemmatisation) using Word2Vec (Mikolov et al., 2013c).
## 5 Result
Both Spearman's rank-order (ρ) and Pearson product-moment (r) correlation coefficients are computed to measure the correlation between SLMP and SPAE. Note that it is not possible to compute the correlations over all pairs at once, because (1) the number of dimensions varies across embedding series, and (2) the source and target embeddings have been pre-processed independently for different mappings. Instead, results are grouped by embedding method and analogy category.
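Both coefficients can be obtained directly from `scipy.stats`; a small sketch of this evaluation step (function name ours):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def correlate(s_lmp_values, s_pae_values):
    """Spearman's rho and Pearson's r (with p-values) between the two indicators
    for one group of language pairs (same embedding series and analogy category)."""
    x = np.asarray(s_lmp_values, dtype=float)
    y = np.asarray(s_pae_values, dtype=float)
    rho, rho_p = spearmanr(x, y)
    r, r_p = pearsonr(x, y)
    return {"spearman_rho": rho, "spearman_p": rho_p, "pearson_r": r, "pearson_p": r_p}
```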
Figures in Tab. 2 show that a significant positive correlation between SPAE and SLMP is observed for all setups. In terms of the Spearman's ρ, among the 18 groups, 5 exhibit *very strong* correlation (ρ ≥ 0.80) (with a maximum at 0.96 for CoNLL embeddings on CAP of xANLGG), 4 show *strong* correlation (0.80 > ρ ≥ 0.70), and the others have *moderate* correlation (0.70 > ρ ≥ 0.50) (with a minimum at 0.58: CoNLL embeddings on ANIM and G-PL of xANLGM). Interestingly, although we do not assume a linear relationship in § 3, large values for the Pearson's r are obtained in practice. To be exact, 4 groups indicate very strong correlation, 6 have strong correlation, while the others retain moderate correlation (the minimum r value is 0.58: Wiki embeddings on CAP and G-PL of xANLGG). These results provide empirical evidence that supplements our theoretical analysis (§ 3) of the relationship between linearity of mappings and analogy preservation.
In addition, we explored whether the analogy type (i.e., semantic or syntactic) affects the correlation. To bootstrap the analysis, for both kinds of correlation coefficients, we divide the 18 experiment groups into two splits, i.e., 12 semantic ones and 6 syntactic ones. After that, we compute a two-treatment ANOVA (Fisher, 1925). For both Spearman's ρ and Pearson's r, the results are not significant at p < 0.1. Therefore, we conclude that the connection between CLWE mapping linearity and analogy encoding preservation holds across analogy types. We thus recommend testing SPAE *before* implementing CLWE alignment as an indicator of whether a linear transformation is a good approximation of the ground-truth CLWE mapping.
[^6]: https://fasttext.cc/docs/en/pretrained-vectors.html

[^7]: https://fasttext.cc/docs/en/crawl-vectors.html

[^8]: http://vectors.nlpl.eu/repository/
![9_image_0.png](9_image_0.png)

Table 2: Correlation coefficients (Spearman's ρ and Pearson's r) between SLMP and SPAE. For all groups, we conduct significance tests to estimate the p-value. Empirically, the p-value is always less than 1e-2 (in most groups it is even less than 1e-3), indicating a very high confidence level for the experiment results. To facilitate future research and analyses, we present the raw SLMP and LRCos data in Appendix B.
Although there are strong correlations between the measures, they are not perfect. We therefore carried out further investigation into the data points in Tab. 2 that do not follow the overall trend. Firstly, we identified that some are associated with "crowded" embedding regions, in which the correct answer to an analogy question is not ranked highest by LRCos but the top candidate is a polysemous term (Rogers et al., 2017). One example is the LRCos score of the CAP analogy for pl's Wiki embeddings, which was underestimated. If we consider the three highest ranked terms, rather than only the top term, then the overall ρ and r of "Wiki: CAP" (the first cell in Tab. 2) will increase sharply to 0.79 and 0.76, respectively.
Secondly, we noticed that in certain cases the source and target vectors of a word pair are too close (i.e., the distance between them is near zero). This phenomenon introduces noise into the results of analogy metrics such as LRCos (Linzen, 2016; Bolukbasi et al., 2016) and, consequently, impacts SPAE. For example, the mean cosine distance between G-PL pairs is smaller in xANLGM (0.18) than in xANLGG (0.24). Therefore, the SPAE for G-PL is less reliable for xANLGM than for xANLGG, leading to a lower correlation.
## 6 Application: Predicting Relationship Between Monolingual Word Embeddings
As discussed in § 2, in many scenarios linear CLWE mappings outperform their non-linear counterparts, while in other setups non-linear CLWE mappings are more successful. Therefore, an indicator that predicts the relationship between independently pre-trained monolingual word embeddings, and thus helps decide whether to use linear or non-linear mappings without training actual CLWEs, would be beneficial. Use of this indicator has the potential to reduce the resources required to find optimal CLWEs (e.g., some recent approaches need several hours of processing on modern GPUs (Peng et al., 2021a; Ormazabal et al., 2021)), with corresponding reductions in carbon footprint.

| Language pair | CCA 1K | CCA 3K | CCA 5K | Proc 1K | Proc 3K | Proc 5K | Proc-B 1K | Proc-B 3K | DLV 1K | DLV 3K | DLV 5K | RCSLS 1K | RCSLS 3K | RCSLS 5K | S̄PAE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| en-fi | .26 | .35 | .38 | .27 | .37 | .40 | .36 | .38 | .27 | .37 | .40 | .31 | .40 | .44 | .41 |
| en-hr | .22 | .30 | .33 | .23 | .31 | .34 | .30 | .34 | .23 | .31 | .33 | .27 | .36 | .38 | .32 |
| en-ru | .34 | .43 | .45 | .35 | .45 | .46 | .42 | .45 | .35 | .44 | .47 | .40 | .49 | .51 | .46 |
| fi-hr | .17 | .26 | .29 | .19 | .27 | .29 | .26 | .29 | .18 | .27 | .29 | .21 | .30 | .32 | .23 |
| fi-ru | .21 | .31 | .34 | .23 | .31 | .34 | .32 | .33 | .23 | .31 | .34 | .26 | .34 | .38 | .33 |
| hr-ru | .26 | .35 | .37 | .27 | .35 | .37 | .35 | .37 | .26 | .35 | .37 | .29 | .38 | .40 | .26 |
| Spearman's ρ | .83 | .82 | .86 | .83 | .84 | .88 | .83 | .86 | .84 | .84 | .87 | .87 | .88 | .90 | |

Table 3: Spearman's ρ between the Word Translation performance (MRR) of linear-mapping-based CLWE methods (from Glavaš et al. (2019); Proc-B's performance with a 5K seed dictionary was not available) and the average analogy encoding preservation score (S̄PAE). For each method, the 1K/3K/5K columns correspond to the seed dictionary size.
The proposed SPAE metric, which can be obtained within several minutes on a single CPU, can be leveraged as such an indicator. A high SPAE score suggests that the linear assumption holds strongly for the ground-truth CLWE mapping, so it is feasible to train a linear CLWE mapping; otherwise, non-linear approaches are recommended. To demonstrate this idea in practice, we revisited a systematic evaluation of CLWE models based on linear mappings (Glavaš et al., 2019), which reported the Mean Reciprocal Rank (MRR) of five representative linear-mapping-based CLWE approaches on the Word Translation task (the de facto standard for CLWEs). We focus on six language pairs (en-fi, en-hr, en-ru, fi-hr, fi-ru, hr-ru) as they are covered by both xANLGM and the dataset of Glavaš et al. (2019). Additionally, only Wiki embeddings were involved in the experiments of Glavaš et al. (2019). Thus, for each language pair, we aggregated the SPAE of different analogy categories for Wiki embeddings, then calculated the average, S̄PAE.
Results are shown in Tab. 3, where the Spearman's ρ between S̄PAE and Word Translation performance is highlighted. Strong positive correlations are observed in all setups that were tested. These results demonstrate that S̄PAE provides an accurate indication of the real-world performance of linear CLWE mappings, regardless of the language pair, mapping algorithm, or level of supervision (i.e., size of the seed dictionary for training). These results also provide solid support for the main statement of our paper, i.e., the ground-truth CLWE mapping between monolingual word embeddings is linear iff analogies encoded in those embeddings are preserved.
## 7 Further Discussion
Prior work relevant to the linearity of CLWE mappings has largely been observational (see § 2). This section sheds new light on these past studies from the novel perspective of word analogies.

**Explaining Non-Linearity.** We provide three suggested reasons why CLWE mappings are sometimes not approximately linear, all linked with the condition of Eq. (2) not being met.
The first may be issues with individual monolingual embeddings (see one such example in the upper part of Fig. 3). In particular, popular word embedding algorithms lack the capacity to ensure semantic continuity over the entire embedding space (Linzen, 2016). Hence, vectors for the analogy words may only exhibit local consistency, with Eq. (2) breaking down for relatively distant regions. This caused the locality of linearity that has been reported by Nakashole & Flauger (2018), Li et al. (2021) and Wang et al. (2021a).
![11_image_0.png](11_image_0.png)

Figure 3: Illustration of example scenarios where the CLWE mapping is non-linear. Translations of English (left) and Chinese (right) terms are indicated by shared symbols. **Upper**: The vector for "*blueberry*" (shadowed) is ill-positioned in the embedding space, so the condition of Eq. (2) is no longer satisfied. **Lower**: In the financial domain some Eastern countries (e.g., China and Japan) traditionally use "*black*" to indicate growth and "*green*" for reduction, while Western countries (e.g., US and UK) assign the opposite meanings to these terms, also not satisfying the condition of Eq. (2).

The second reason why a CLWE mapping may not be linear is semantic gaps. Although the analogies in our xANLG corpus are all language-agnostic, the analogical relations between words may sometimes change or even disappear. For example, language pairs may have very different grammars, e.g., Chinese does not have plural morphology (Li & Thompson, 1989), so some types of analogy, e.g. the G-PL category used above, do not hold. Also, analogies may evolve differently across cultures (see the example in the lower part of Fig. 3). These two factors go some way to explaining why typologically and etymologically distant language pairs tend to have worse alignment (Ruder et al., 2019).
Thirdly, many studies point out that differences in the domain of training data can influence the similarity between multilingual word embeddings (Søgaard et al., 2018; Artetxe et al., 2018). Besides, we argue that due to polysemy, analogies may change from one domain to another. Under such circumstances, Eq. (2) is violated and the linear assumption no longer holds.

**Mitigating Non-Linearity.** The proposed analogy-inspired framework justifies the success and failure of the linearity assumption for CLWEs. As discussed earlier, it also suggests a method for indirectly assessing the linearity of a CLWE mapping prior to implementation. Moreover, it offers principled methods for designing more effective CLWE algorithms. The most straightforward idea is to explicitly use Eq. (2) as a training constraint, which has very recently been practised by Garneau et al. (2021).[^9] Based on analogy pairs retrieved from external knowledge bases for different languages, their approach directly learnt to better encode monolingual analogies, particularly those whose vectors are distant in the embedding space. It not only works well on static word embeddings, but also leads to performance gains for large-scale pre-trained cross-lingual language models including multilingual BERT (Devlin et al., 2019). These results on multiple tasks (e.g., bilingual lexicon induction and cross-lingual sentence retrieval) can be seen as an independent confirmation of this paper's main claim and a demonstration of its usefulness.
Our study also suggests another unexplored direction: incorporating analogy-based information into nonlinear CLWE mappings. Existing work has already introduced non-linearity to CLWE mappings by applying a variety of techniques including directly training non-linear functions (Mohiuddin et al., 2020), tuning linear mappings for outstanding non-isomorphic instances (Glavaš & Vulić, 2020) and learning multiple linear CLWE mappings instead of a single one (Nakashole, 2018; Wang et al., 2021a) (see § 2). However, there is a lack of theoretical motivation for decisions about how the non-linear mapping should be modelled.
Nevertheless, the results presented here suggest that ensembles of linear transformations, covering analogy-preserving regions of the embedding space, would make a reasonable approximation of the ground-truth CLWE mappings, and that information about analogy preservation could be used to partition embedding spaces into multiple regions, between which independent linear mappings can be learnt. We leave this application as important future work.

[^9]: They cited our earlier preprint as the primary motivation for their approach.
## 8 Conclusion And Future Work
This paper makes the first attempt to explore the conditions under which CLWE mappings are linear. Theoretically, we show that this widely-adopted assumption holds iff the analogies encoded are preserved across embeddings for different languages. We describe the construction of a novel cross-lingual word analogy dataset for a diverse range of languages and analogy categories and we propose indicators to quantify linearity and analogy preservation. Experiment results on three distinct embedding series firmly support our hypothesis. We also demonstrate how our insight into the connection between linearity and analogy preservation can be used to better understand past observations about the limitations of linear CLWE mappings, particularly when they are ineffective. Our findings regarding the preservation of analogy encoding provide a test that can be applied to determine the likely success of any attempt to create linear mappings between multilingual embeddings. We hope this study can guide future studies in the CLWE field.
Additionally, we plan to expand our theoretical insight to contextual embeddings, inspired by Garneau et al. (2021) who demonstrated that developing mappings that preserve encoded analogies benefits pre-trained cross-lingual language models as well. We also aim to enrich xANLG by including new languages and analogies to enable explorations at an even larger scale. Finally, we will further design CLWE approaches that learn multiple linear mappings between local embedding regions outlined with analogy-based metrics (see § 7).
## Broader Impact Statement
CLWE bridges the gap between languages and is efficient enough to be applied in situations where limited resources are available, including to endangered languages (Zhang et al., 2020; Ngoc Le & Sadat, 2020). This paper presented a theoretical analysis of the mechanisms underlying CLWE techniques which has potential to improve these methods. Moreover, the proposed SPAE metric predicts whether monolingual word embeddings in different languages should be aligned using a linear or non-linear mapping, without actually training the CLWEs. This indicator lowers the computational expense required to identify a suitable mapping approach, thereby reducing the computational power needed and negative environmental effects.
Our analysis relies on the use of analogies and previous work has indicated that these may contain biases, e.g., regarding gender (Bolukbasi et al., 2016; Sun et al., 2019). Any future work that incorporates analogies within the CLWE process should be aware of the potential consequences of any biases that may be contained within the analogies used. On the other hand, there is potential for the findings of this work to be leveraged for bias alleviation in cross-lingual representation learning.
## Acknowledgements
We would like to express our sincerest gratitude to all volunteers from Beijing Foreign Studies University who manually annotated and validated the xANLG corpus, as well as Guowei Zhang, Guanyi Chen, Ruizhe Li, Alison Sneyd, and Harish Tayyar Madabushi who helped this study. We also thank the official TMLR reviewers for their insightful comments and Angeliki Lazaridou for the action editing.
## References
Carl Allen and Timothy Hospedales. Analogies explained: Towards understanding word embeddings. In *Proceedings of the 36th International Conference on Machine Learning*, pp. 223–231, Long Beach, California, USA, 2019. PMLR. URL http://proceedings.mlr.press/v97/allen19a.html.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, 4:385–399, 2016. doi: 10.1162/tacl_a_00106. URL https://www.aclweb.org/anthology/Q16-1028.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pp. 2289–2294, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1250. URL https://www.aclweb.org/anthology/D16-1250.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 451–462, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1042. URL https://www.aclweb.org/anthology/P17-1042.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 789–798, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1073. URL https://www.aclweb.org/anthology/P18-1073.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146, 2017. doi: 10.1162/tacl_a_00051. URL https://aclanthology.org/Q17-1010.

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, NIPS'16, pp. 4356–4364, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.

Fred L. Bookstein. *Morphometric Tools for Landmark Data: Geometry and Biology*. Cambridge University Press, 1992. doi: 10.1017/CBO9780511573064.

Tomáš Brychcín, Stephen Taylor, and Lukáš Svoboda. Cross-lingual word analogies using linear transformations between semantic spaces. *Expert Systems with Applications*, 135:287–295, 2019. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2019.06.021. URL http://www.sciencedirect.com/science/article/pii/S0957417419304191.

Cristian Cardellino. Spanish Billion Words Corpus and Embeddings, August 2019. URL https://crscardellino.github.io/SBWCE/.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 8440–8451, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.747. URL https://aclanthology.org/2020.acl-main.747.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Aleksandr Drozd, Anna Gladkova, and Satoshi Matsuoka. Word embeddings, analogies, and machine learning: Beyond king - man + woman = queen. In *Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers*, pp. 3519–3530, Osaka, Japan, December 2016. The COLING 2016 Organizing Committee. URL https://www.aclweb.org/anthology/C16-1332.

Haim Dubossarsky, Ivan Vulić, Roi Reichart, and Anna Korhonen. The secret is in the spectra: Predicting cross-lingual task performance with spectral similarity measures. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 2377–2390, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.186. URL https://aclanthology.org/2020.emnlp-main.186.

Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. Towards understanding linear word analogies. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 3253–3262, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1315. URL https://www.aclweb.org/anthology/P19-1315.

Manaal Faruqui and Chris Dyer. Improving vector space word representations using multilingual correlation. In *Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics*, pp. 462–471, Gothenburg, Sweden, April 2014. Association for Computational Linguistics. doi: 10.3115/v1/E14-1049. URL https://www.aclweb.org/anthology/E14-1049.

R. A. Fisher. *Statistical Methods for Research Workers*. Oliver & Boyd, Edinburgh, 1925. URL http://psychclassics.yorku.ca/Fisher/Methods/.

Louis Fournier and Ewan Dunbar. Paraphrases do not explain word analogies. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 2129–2134, Online, April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.182. URL https://aclanthology.org/2021.eacl-main.182.

Ashwinkumar Ganesan, Francis Ferraro, and Tim Oates. Learning a reversible embedding mapping using bi-directional manifold alignment. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pp. 3132–3139, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.276. URL https://aclanthology.org/2021.findings-acl.276.

Nicolas Garneau, Mareike Hartmann, Anders Sandholm, Sebastian Ruder, Ivan Vulić, and Anders Søgaard. Analogy training multilingual encoders. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(14):12884–12892, May 2021. URL https://ojs.aaai.org/index.php/AAAI/article/view/17524.

Dedre Gentner. Structure-mapping: A theoretical framework for analogy. *Cognitive Science*, 7(2):155–170, 1983. ISSN 0364-0213. doi: https://doi.org/10.1016/S0364-0213(83)80009-3. URL https://www.sciencedirect.com/science/article/pii/S0364021383800093.

Alex Gittens, Dimitris Achlioptas, and Michael W. Mahoney. Skip-Gram - Zipf + Uniform = Vector Additivity. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 69–76, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1007. URL https://www.aclweb.org/anthology/P17-1007.

Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. In *Proceedings of the NAACL Student Research Workshop*, pp. 8–15, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-2002. URL https://aclanthology.org/N16-2002.

Goran Glavaš and Ivan Vulić. Non-linear instance-based cross-lingual mapping for non-isomorphic embedding spaces. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7548–7555, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.675. URL https://aclanthology.org/2020.acl-main.675.

Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 710–721, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1070. URL https://aclanthology.org/P19-1070.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. Learning word vectors for 157 languages. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, Miyazaki, Japan, May 2018. European Language Resources Association (ELRA). URL https://www.aclweb.org/anthology/L18-1550.

Tatsunori B. Hashimoto, David Alvarez-Melis, and Tommi S. Jaakkola. Word embeddings as metric recovery in semantic spaces. *Transactions of the Association for Computational Linguistics*, 4:273–286, 2016. doi: 10.1162/tacl_a_00098. URL https://aclanthology.org/Q16-1020.

Christian Herold, Jan Rosendahl, Joris Vanvinckenroye, and Hermann Ney. Data filtering using cross-lingual word embeddings. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 162–172, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.15. URL https://aclanthology.org/2021.naacl-main.15.

O. Kariv and S. L. Hakimi. An algorithmic approach to network location problems. II: The P-Medians. *SIAM Journal on Applied Mathematics*, 37(3):539–560, 1979. doi: 10.1137/0137041. URL https://doi.org/10.1137/0137041.

Yunsu Kim, Miguel Graça, and Hermann Ney. When and why is unsupervised neural machine translation useless? In *Proceedings of the 22nd Annual Conference of the European Association for Machine Translation*, pp. 35–44, Lisboa, Portugal, November 2020. European Association for Machine Translation. URL https://www.aclweb.org/anthology/2020.eamt-1.5.

Martin Kleiber and W. J. Pervin. A generalized Banach-Mazur theorem. *Bulletin of the Australian Mathematical Society*, 1(2):169–173, 1969. doi: 10.1017/S0004972700041411.

Maximilian Köper, Christian Scheible, and Sabine Schulte im Walde. Multilingual reliability and "semantic" structure of continuous word spaces. In *Proceedings of the 11th International Conference on Computational Semantics*, pp. 40–45, London, UK, April 2015. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W15-0105.

Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. In *International Conference on Learning Representations*, 2018a. URL https://openreview.net/forum?id=rkYTTf-AZ.

Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In *International Conference on Learning Representations*, 2018b. URL https://openreview.net/forum?id=H196sainb.

Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In *Proceedings of the Eighteenth Conference on Computational Natural Language Learning*, pp. 171–180, Ann Arbor, Michigan, June 2014a. Association for Computational Linguistics. doi: 10.3115/v1/W14-1618. URL https://www.aclweb.org/anthology/W14-1618.

Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014b. URL https://proceedings.neurips.cc/paper/2014/file/feab05aa91085b7a8012516bc3533958-Paper.pdf.

Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. *Transactions of the Association for Computational Linguistics*, 3:211–225, 2015. doi: 10.1162/tacl_a_00134. URL https://aclanthology.org/Q15-1016.

Charles N Li and Sandra A Thompson. *Mandarin Chinese: A functional reference grammar*, volume 3. University of California Press, 1989.

Yuling Li, Kui Yu, and Yuhong Zhang. Learning cross-lingual mappings in imperfectly isomorphic embedding spaces. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:2630–2642, 2021. doi: 10.1109/TASLP.2021.3097935.

Tal Linzen. Issues in evaluating semantic spaces using word analogies. In *Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP*, pp. 13–18, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/W16-2503. URL https://www.aclweb.org/anthology/W16-2503.

Noa Yehezkel Lubin, Jacob Goldberger, and Yoav Goldberg. Aligning vector-spaces with noisy supervised lexicon. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 460–465, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1045. URL https://www.aclweb.org/anthology/N19-1045.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In *1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings*, 2013a. URL https://openreview.net/forum?id=idpCdOWtqXd60.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. *CoRR*, abs/1309.4168, 2013b. URL http://arxiv.org/abs/1309.4168.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In *Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2*, NIPS'13, pp. 3111–3119, USA, 2013c. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999792.2999959.

Tasnim Mohiuddin, M Saiful Bari, and Shafiq Joty. LNMap: Departures from isomorphic assumption in bilingual lexicon induction through non-linear mapping in latent space. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 2712–2723, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.215. URL https://aclanthology.org/2020.emnlp-main.215.

Ndapa Nakashole. NORMA: Neighborhood sensitive maps for multilingual word embeddings. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 512–522, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1047. URL https://www.aclweb.org/anthology/D18-1047.

Ndapa Nakashole and Raphael Flauger. Characterizing departures from linearity in word translation. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)*, pp. 221–227, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2036. URL https://www.aclweb.org/anthology/P18-2036.

J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. *The Computer Journal*, 7(4):308–313, January 1965. ISSN 0010-4620. doi: 10.1093/comjnl/7.4.308. URL https://doi.org/10.1093/comjnl/7.4.308.

Tan Ngoc Le and Fatiha Sadat. Revitalization of indigenous languages through pre-processing and neural machine translation: The case of Inuktitut. In *Proceedings of the 28th International Conference on Computational Linguistics*, pp. 4661–4666, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.410. URL https://aclanthology.org/2020.coling-main.410.

Aitor Ormazabal, Mikel Artetxe, Aitor Soroa, Gorka Labaka, and Eneko Agirre. Beyond offline mapping: Learning cross-lingual word embeddings through context anchoring. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 6479–6489, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.506. URL https://aclanthology.org/2021.acl-long.506.

Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 184–193, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1018. URL https://www.aclweb.org/anthology/P19-1018.

Xutan Peng, Chenghua Lin, and Mark Stevenson. Cross-lingual word embedding refinement by ℓ1 norm optimisation. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2690–2701, Online, June 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.214. URL https://aclanthology.org/2021.naacl-main.214.

Xutan Peng, Yi Zheng, Chenghua Lin, and Advaith Siddharthan. Summarising historical text in modern languages. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 3123–3142, Online, April 2021b. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2021.eacl-main.273.

Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1162. URL https://www.aclweb.org/anthology/D14-1162.

Henri Prade and Gilles Richard. Analogical proportions: Why they are useful in AI. In Zhi-Hua Zhou (ed.), *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pp. 4568–4576. International Joint Conferences on Artificial Intelligence Organization, August 2021. doi: 10.24963/ijcai.2021/621. URL https://doi.org/10.24963/ijcai.2021/621. Survey Track.

Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7237–7256, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.647. URL https://aclanthology.org/2020.acl-main.647.

Anna Rogers, Aleksandr Drozd, and Bofang Li. The (too many) problems of analogical reasoning with word vectors. In *Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (\*SEM 2017)*, pp. 135–148, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-1017. URL https://aclanthology.org/S17-1017.

Sebastian Ruder, Ivan Vulić, and Anders Søgaard. A survey of cross-lingual word embedding models. *Journal of Artificial Intelligence Research*, 65(1):569–630, May 2019. ISSN 1076-9757. doi: 10.1613/jair.1.11640. URL https://doi.org/10.1613/jair.1.11640.

Anders Søgaard, Sebastian Ruder, and Ivan Vulić. On the limitations of unsupervised bilingual dictionary induction. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 778–788, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1072. URL https://www.aclweb.org/anthology/P18-1072.

Jimin Sun, Hwijeen Ahn, Chan Young Park, Yulia Tsvetkov, and David R. Mortensen. Cross-cultural similarity features for cross-lingual transfer learning of pragmatically motivated tasks. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pp. 2403–2414, Online, April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.204. URL https://aclanthology.org/2021.eacl-main.204.

Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. Mitigating gender bias in natural language processing: Literature review. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 1630–1640, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1159. URL https://aclanthology.org/P19-1159.

Matej Ulčar, Kristiina Vaik, Jessica Lindström, Milda Dailidėnaitė, and Marko Robnik-Šikonja. Multilingual culture-independent word analogy datasets. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pp. 4074–4080, Marseille, France, May 2020. European Language Resources Association. ISBN 979-10-95546-34-4. URL https://aclanthology.org/2020.lrec-1.501.

Ivan Vulić, Sebastian Ruder, and Anders Søgaard. Are all good word vector spaces isomorphic? In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 3178–3192, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.257. URL https://aclanthology.org/2020.emnlp-main.257.

Haozhou Wang, James Henderson, and Paola Merlo. Multi-adversarial learning for cross-lingual word embeddings. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 463–472, Online, June 2021a. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.39. URL https://aclanthology.org/2021.naacl-main.39.

Meihong Wang, Linling Qiu, and Xiaoli Wang. A survey on knowledge graph embeddings for link prediction. *Symmetry*, 13(3), 2021b. ISSN 2073-8994. doi: 10.3390/sym13030485. URL https://www.mdpi.com/2073-8994/13/3/485.

Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime G. Carbonell. Cross-lingual alignment vs joint training: A comparative study and a simple unified framework. In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020*. OpenReview.net, 2020. URL https://openreview.net/forum?id=S1l-C0NtwS.

Svante Wold, Kim Esbensen, and Paul Geladi. Principal Component Analysis. *Chemometrics and Intelligent Laboratory Systems*, 2(1):37–52, 1987. ISSN 0169-7439. doi: https://doi.org/10.1016/0169-7439(87)80084-9. URL http://www.sciencedirect.com/science/article/pii/0169743987800849. Proceedings of the Multivariate Statistical Workshop for Geologists and Geochemists.

Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. In *Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 1006–1011, Denver, Colorado, May–June 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1104. URL https://www.aclweb.org/anthology/N15-1104.

Mozhi Zhang, Yoshinari Fujinuma, and Jordan Boyd-Graber. Exploiting cross-lingual subword similarities in low-resource document classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 9547–9554, 2020.

Yi Zhang, Jie Lu, Feng Liu, Qian Liu, Alan Porter, Hongshu Chen, and Guangquan Zhang. Does deep learning help topic extraction? A kernel k-means clustering method with word embedding. *Journal of Informetrics*, 12(4):1099–1117, 2018. ISSN 1751-1577. doi: https://doi.org/10.1016/j.joi.2018.09.004. URL https://www.sciencedirect.com/science/article/pii/S1751157718300257.

Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, and Daxin Jiang. Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 5822–5834, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.465. URL https://aclanthology.org/2021.naacl-main.465.
## A Question Formulations
For an analogy category with $t$ word pairs, $\binom{t}{2}$ four-item elements can be composed. An arbitrary element, α:β :: γ:θ, can yield eight analogy completion questions as follows:

α:β :: γ:?  β:α :: θ:?  γ:α :: θ:?  θ:β :: γ:?

α:γ :: β:?  β:θ :: α:?  γ:θ :: α:?  θ:γ :: β:?

Hence, $\binom{t}{2} \times 8$ unique questions can be generated.
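To make this enumeration concrete, the following Python sketch (purely illustrative and not part of the xANLG release; the function name and the example word pairs are ours) generates the eight questions for every unordered selection of two word pairs in a category:

```python
from itertools import combinations

def analogy_questions(word_pairs):
    """Enumerate the eight completion questions yielded by every unordered
    selection of two word pairs. Each question is a tuple (a, b, c, answer),
    read as "a is to b as c is to ?"."""
    questions = []
    for (alpha, beta), (gamma, theta) in combinations(word_pairs, 2):
        questions.extend([
            (alpha, beta, gamma, theta), (beta, alpha, theta, gamma),
            (gamma, alpha, theta, beta), (theta, beta, gamma, alpha),
            (alpha, gamma, beta, theta), (beta, theta, alpha, gamma),
            (gamma, theta, alpha, beta), (theta, gamma, beta, alpha),
        ])
    return questions

# A category with t pairs yields C(t, 2) * 8 questions, e.g. C(3, 2) * 8 = 24.
pairs = [("France", "Paris"), ("Germany", "Berlin"), ("Japan", "Tokyo")]
assert len(analogy_questions(pairs)) == 24
```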
## B Raw Data For Tab. 2
| xANLG$_G$ | | en-de | en-es | en-fr | en-hi | en-pl | de-es | de-fr | de-hi | de-pl | es-fr | es-hi | es-pl | fr-hi | fr-pl | hi-pl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wiki | CAP | .16 | .21 | .17 | .36 | .23 | .21 | .18 | .36 | .22 | .22 | .35 | .25 | .35 | .23 | .33 |
| Wiki | GNDR | .32 | .42 | .39 | .26 | .35 | .48 | .40 | .41 | .36 | .39 | .43 | .38 | .30 | .40 | .42 |
| Wiki | NATL | .18 | .16 | .15 | .14 | .20 | .19 | .19 | .33 | .21 | .16 | .30 | .21 | .14 | .20 | .32 |
| Wiki | G-PL | .22 | .23 | .22 | .36 | .26 | .25 | .23 | .35 | .26 | .25 | .38 | .27 | .37 | .26 | .38 |
| Crawl | CAP | .23 | .23 | .20 | .23 | .29 | .26 | .23 | .24 | .28 | .23 | .26 | .28 | .24 | .29 | .38 |
| Crawl | GNDR | .57 | .58 | .59 | .56 | .54 | .65 | .66 | .57 | .59 | .64 | .56 | .57 | .56 | .57 | .58 |
| Crawl | NATL | .32 | .43 | .27 | .39 | .29 | .32 | .35 | .47 | .35 | .40 | .43 | .31 | .46 | .31 | .42 |
| Crawl | G-PL | .35 | .24 | .33 | .48 | .29 | .33 | .37 | .44 | .42 | .33 | .47 | .33 | .48 | .42 | .51 |
| CoNLL | CAP | .31 | .58 | .32 | .55 | .39 | .58 | .32 | .56 | .38 | .59 | .66 | .59 | .56 | .40 | .55 |
| CoNLL | GNDR | .48 | .76 | .49 | .55 | .48 | .74 | .55 | .57 | .50 | .77 | .76 | .72 | .59 | .52 | .58 |
| CoNLL | NATL | .37 | .72 | .26 | .51 | .38 | .78 | .34 | .52 | .36 | .74 | .74 | .73 | .50 | .35 | .50 |
| CoNLL | G-PL | .32 | .67 | .32 | .48 | .36 | .65 | .34 | .47 | .36 | .68 | .67 | .65 | .50 | .38 | .49 |

| xANLG$_M$ | | en-et | en-fi | en-hr | en-lv | en-ru | en-sl | et-fi | et-hr | et-lv | et-ru | et-sl | fi-hr | fi-lv | fi-ru | fi-sl | hr-lv | hr-ru | hr-sl | lv-ru | lv-sl | ru-sl |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wiki | ANIM | .50 | .50 | .22 | .31 | .19 | .15 | .56 | .27 | .37 | .30 | .35 | .29 | .41 | .30 | .40 | .32 | .36 | .28 | .31 | .22 | .20 |
| Wiki | G-PL | .25 | .22 | .37 | .37 | .28 | .33 | .24 | .31 | .29 | .28 | .26 | .30 | .29 | .26 | .27 | .33 | .32 | .30 | .33 | .28 | .28 |
| Crawl | ANIM | .55 | .55 | .55 | .49 | .55 | .51 | .34 | .41 | .45 | .22 | .41 | .40 | .46 | .41 | .45 | .37 | .23 | .28 | .38 | .24 | .43 |
| Crawl | G-PL | .28 | .43 | .47 | .43 | .45 | .40 | .30 | .45 | .37 | .43 | .37 | .46 | .40 | .44 | .43 | .42 | .50 | .54 | .39 | .35 | .43 |
| CoNLL | ANIM | .54 | .54 | .99 | .55 | .50 | .53 | .29 | .74 | .46 | .37 | .43 | .87 | .51 | .38 | .46 | .64 | .77 | .98 | .42 | .36 | .41 |
| CoNLL | G-PL | .45 | .40 | .52 | .42 | .40 | .42 | .37 | .77 | .41 | .41 | .40 | .81 | .37 | .36 | .39 | .84 | .66 | .77 | .36 | .40 | .38 |

Table 4: Raw SLMP results (the negative sign is omitted for brevity; upper: xANLG$_G$, lower: xANLG$_M$).

| xANLG$_G$ | Wiki CAP | Wiki GNDR | Wiki NATL | Wiki G-PL | Crawl CAP | Crawl GNDR | Crawl NATL | Crawl G-PL | CoNLL CAP | CoNLL GNDR | CoNLL NATL | CoNLL G-PL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| de | .68 | .25 | .21 | .23 | .47 | .48 | .79 | .77 | .65 | .43 | .41 | .55 |
| en | .94 | .33 | .94 | .58 | .57 | .67 | .76 | .94 | .87 | .57 | .79 | .61 |
| es | .45 | .13 | .35 | .13 | .40 | .57 | .68 | .87 | .13 | .07 | .07 | .17 |
| fr | .92 | .27 | .76 | .13 | .65 | .50 | .85 | .87 | .48 | .14 | .24 | .35 |
| hi | .29 | .30 | .42 | .07 | .58 | .59 | .59 | .32 | .32 | .37 | .31 | .16 |
| pl | .16 | .21 | .26 | .10 | .29 | .55 | .82 | .84 | .45 | .45 | .38 | .52 |

| xANLG$_M$ | Wiki ANIM | Wiki G-PL | Crawl ANIM | Crawl G-PL | CoNLL ANIM | CoNLL G-PL |
|---|---|---|---|---|---|---|
| en | .48 | .65 | .29 | .87 | .36 | .58 |
| et | .12 | .50 | .52 | 1.00 | .21 | .48 |
| fi | .06 | .65 | .48 | .87 | .42 | .54 |
| hr | .17 | .20 | .50 | .68 | .07 | .11 |
| lv | .19 | .10 | .39 | .84 | .27 | .23 |
| ru | .36 | .40 | .61 | .87 | .42 | .55 |
| sl | .42 | .23 | .39 | .81 | .12 | .39 |

Table 5: Raw monolingual LRCos results (upper: xANLG$_G$; lower: xANLG$_M$).
8HuyXvbvqX/8HuyXvbvqX_meta.json
ADDED
@@ -0,0 +1,25 @@
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 20,
    "ocr_stats": {
        "ocr_pages": 0,
        "ocr_failed": 0,
        "ocr_success": 0,
        "ocr_engine": "none"
    },
    "block_stats": {
        "header_footer": 20,
        "code": 0,
        "table": 4,
        "equations": {
            "successful_ocr": 33,
            "unsuccessful_ocr": 3,
            "equations": 36
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}
8HuyXvbvqX/9_image_0.png
ADDED
Git LFS Details