RicardoRei committed · Commit e2f7998 · Parent: d4918ff

UniTE MUP checkpoint

Files changed:
- README.md +163 -0
- checkpoints/model.ckpt +3 -0
- hparams.yaml +30 -0

README.md CHANGED
@@ -1,3 +1,166 @@
---
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh

license: apache-2.0
---

This model was developed by the NLP2CT Lab at the University of Macau and Alibaba Group, and all credit belongs to these groups. Since it was developed using the COMET codebase, we adapted the code to run these models within COMET.

# Paper

- [UniTE: Unified Translation Evaluation](https://aclanthology.org/2022.acl-long.558/) (Wan et al., ACL 2022)

# Original Code

- [UniTE](https://github.com/NLP2CT/UniTE)

# License

Apache 2.0

# Usage (unbabel-comet)

Using this model requires unbabel-comet to be installed:

```bash
pip install --upgrade pip  # ensures that pip is current
pip install unbabel-comet
```

Then you can use it through the comet CLI:

```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/unite-mup
```

Or using Python:

```python
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/unite-mup")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
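As the snippet above shows, `predict` expects a list of dicts with `src`, `mt`, and `ref` keys. A minimal stdlib check one might run before scoring to catch malformed samples early (`validate_samples` is a hypothetical helper, not part of unbabel-comet):

```python
REQUIRED_KEYS = {"src", "mt", "ref"}

def validate_samples(samples: list) -> list:
    """Raise early if any sample is missing a required field."""
    for i, sample in enumerate(samples):
        missing = REQUIRED_KEYS - sample.keys()
        if missing:
            raise ValueError(f"sample {i} is missing keys: {sorted(missing)}")
    return samples

# A well-formed sample passes through unchanged
validate_samples([{"src": "s", "mt": "m", "ref": "r"}])
```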

# Intended uses

Our model is intended to be used for **MT evaluation**.

Given a triplet of (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation.
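Alongside these per-segment scores, unbabel-comet also reports a corpus-level system score, which is simply the mean of the segment scores. A hedged sketch of that aggregation (not the library's own code):

```python
def system_score(segment_scores: list) -> float:
    """Corpus-level score: the mean of the per-segment scores."""
    return sum(segment_scores) / len(segment_scores)

print(system_score([0.5, 1.0]))  # 0.75
```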
159 |
+
|
160 |
+
# Languages Covered:
|
161 |
+
|
162 |
+
This model builds on top of XLM-R which cover the following languages:
|
163 |
+
|
164 |
+
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish.
|
165 |
+
|
166 |
+
Thus, results for language pairs containing uncovered languages are unreliable!
|
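Given that warning, a quick guard one might add before trusting scores, based on the covered-language list in the frontmatter (`pair_is_covered` is a hypothetical helper, and the code set is abbreviated here):

```python
# Subset of the ISO 639-1 codes listed in the model card's `language` frontmatter
COVERED = {"af", "de", "en", "fr", "ja", "pt", "ru", "zh"}

def pair_is_covered(src_lang: str, tgt_lang: str) -> bool:
    """True only when both sides of the language pair are covered by XLM-R."""
    return src_lang in COVERED and tgt_lang in COVERED

print(pair_is_covered("de", "en"))  # True
```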
checkpoints/model.ckpt ADDED
@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:009d25e6e8b3317bef1bbab5185881d2eb84ba9e98abf8f8f0509bc3f3b2aae5
size 2260734321
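The checkpoint itself is stored via Git LFS, so the file above is only a pointer: each line is a space-separated `key value` pair per the git-lfs spec. A minimal stdlib sketch of reading such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:009d25e6e8b3317bef1bbab5185881d2eb84ba9e98abf8f8f0509bc3f3b2aae5\n"
    "size 2260734321\n"
)
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # checkpoint size, ~2.26 GB
```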
hparams.yaml ADDED
@@ -0,0 +1,30 @@

activations: Tanh
batch_size: 4
class_identifier: unified_metric
dropout: 0.1
encoder_learning_rate: 5.0e-06
encoder_model: XLM-RoBERTa
final_activation: null
hidden_sizes:
- 3072
- 1024
input_segments:
- src
- mt
- ref
keep_embeddings_frozen: true
layer: mix
layerwise_decay: 0.95
learning_rate: 1.5e-05
load_weights_from_checkpoint: null
nr_frozen_epochs: 0.3
optimizer: AdamW
pool: cls
pretrained_model: xlm-roberta-large
train_data: data/1719-da.csv
validation_data:
- data/qad-ende-newstest2020.csv
- data/qad-enru-newstest2020.csv
- data/wmt-ende-newstest2021.csv
- data/wmt-zhen-newstest2021.csv
- data/wmt-enru-newstest2021.csv
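Reading these hyperparameters would normally go through PyYAML, but for quick stdlib-only inspection a tiny parser covering just the flat `key: value` / `- item` subset used above suffices (a sketch, not a general YAML parser; values stay strings except null/true/false):

```python
def parse_flat_yaml(text: str) -> dict:
    """Parse the flat key/value + list subset of YAML used in hparams.yaml."""
    scalars = {"null": None, "true": True, "false": False}
    result, list_key = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("- ") and list_key is not None:
            result[list_key].append(line[2:])
        else:
            key, _, value = line.partition(":")
            value = value.strip()
            if value:
                result[key] = scalars.get(value, value)
                list_key = None
            else:  # bare "key:" starts a list
                result[key] = []
                list_key = key
    return result

hparams = parse_flat_yaml("""\
pool: cls
keep_embeddings_frozen: true
hidden_sizes:
- 3072
- 1024
""")
print(hparams["hidden_sizes"])  # ['3072', '1024']
```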