Papadopoulos, Dimitris committed on
Commit 35af852
1 Parent(s): 89c7a97

Added model weights, vocabs, readme.

Files changed (7)
  1. README.md +81 -0
  2. config.json +37 -0
  3. merges.txt +0 -0
  4. pytorch_model.bin +3 -0
  5. tokenizer_config.json +8 -0
  6. vocab-src.json +0 -0
  7. vocab-tgt.json +0 -0
README.md ADDED
---
language:
- en
- el
tags:
- translation

widget:
- text: "Δεν υπάρχει τίποτε πιο άνισο από την ίση μεταχείριση των ανίσων."
license: apache-2.0
metrics:
- bleu
---


## Greek to English NMT (lower-case output)
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)

* source languages: el
* target languages: en
* license: apache-2.0
* dataset: OPUS, CCMatrix
* model: transformer (fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
* output: lowercase only; for a mixed-case model use https://huggingface.co/lighteternal/SSE-TUC-mt-el-en-cased


### Model description

Trained with the Fairseq framework using the transformer_iwslt_de_en architecture.\
BPE segmentation (10k codes).\
Lower-case model.

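The BPE segmentation described above is applied by the model's tokenizer itself, so the subword pieces can be inspected directly. A minimal sketch, assuming the repository has been downloaded locally (the folder path below is a placeholder):

```python
# A minimal sketch: inspect the BPE subword pieces produced by the tokenizer.
# The folder path is a placeholder for a local copy of this repository.
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("<your_downloaded_model_folderpath_here>")
pieces = tokenizer.tokenize("Δεν υπάρχει τίποτε πιο άνισο από την ίση μεταχείριση των ανίσων.")
print(pieces)  # subword units; the exact splits depend on the learned BPE codes
```
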
### How to use

```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration

mname = "<your_downloaded_model_folderpath_here>"

tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

text = "Δεν υπάρχει τίποτε πιο άνισο από την ίση μεταχείριση των ανίσων."

# Encode the source sentence and generate five beam-search hypotheses
encoded = tokenizer.encode(text, return_tensors="pt")
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)

for i, output in enumerate(outputs, start=1):
    print(f"{i}: {output.tolist()}")  # raw token ids of hypothesis i
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```

## Training data

Consolidated corpus from OPUS and CCMatrix (~6.6 GB in total).

## Eval results

Results on the Tatoeba test set (EL-EN):

| BLEU | chrF |
| ------ | ------ |
| 79.3 | 0.795 |

Results on XNLI parallel (EL-EN):

| BLEU | chrF |
| ------ | ------ |
| 66.2 | 0.623 |

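A minimal sketch of how corpus-level scores like these can be computed with sacreBLEU follows; the file names are hypothetical (one sentence per line, hypotheses being the model's lowercased outputs), and depending on the sacreBLEU version chrF may be reported on a 0-100 rather than a 0-1 scale.

```python
# A minimal sketch: corpus-level BLEU and chrF with sacreBLEU.
# "hypotheses.txt" and "references.txt" are hypothetical file names,
# one sentence per line; hypotheses are the model's (lowercased) outputs.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.3f}")
```
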
### BibTeX entry and citation info
TODO
config.json ADDED
{
  "architectures": [
    "FSMTForConditionalGeneration"
  ],
  "model_type": "fsmt",
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "attention_dropout": 0.0,
  "d_model": 512,
  "dropout": 0.3,
  "init_std": 0.02,
  "max_position_embeddings": 1024,
  "num_hidden_layers": 6,
  "src_vocab_size": 12896,
  "tgt_vocab_size": 9936,
  "langs": [
    "el",
    "en"
  ],
  "encoder_attention_heads": 4,
  "encoder_ffn_dim": 1024,
  "encoder_layerdrop": 0,
  "encoder_layers": 6,
  "decoder_attention_heads": 4,
  "decoder_ffn_dim": 1024,
  "decoder_layerdrop": 0,
  "decoder_layers": 6,
  "bos_token_id": 0,
  "pad_token_id": 1,
  "eos_token_id": 2,
  "is_encoder_decoder": true,
  "scale_embedding": true,
  "tie_word_embeddings": false,
  "num_beams": 5,
  "early_stopping": false,
  "length_penalty": 1.0
}
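These hyperparameters surface directly on the loaded configuration object. A minimal sketch (the folder path is a placeholder for a local copy of this repository):

```python
# A minimal sketch: the values above are exposed as attributes of FSMTConfig.
from transformers import FSMTConfig

config = FSMTConfig.from_pretrained("<your_downloaded_model_folderpath_here>")
print(config.src_vocab_size, config.tgt_vocab_size)           # 12896 9936
print(config.encoder_layers, config.encoder_attention_heads)  # 6 4
```
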
merges.txt ADDED
The diff for this file is too large to render.
 
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:68f18a2904840e82d003ee7132cbb5de54147c6dbbca1b50f64a8d7051c3b898
size 172976478
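Only this Git LFS pointer is stored in the repository; the actual weights are fetched from LFS storage. A minimal sketch for checking a downloaded copy against the pointer's size and checksum (the file is assumed to sit in the current working directory):

```python
# A minimal sketch: verify a downloaded pytorch_model.bin against the Git LFS
# pointer above (size in bytes and SHA-256 digest).
import hashlib
import os

path = "pytorch_model.bin"  # assumed location of the downloaded weights

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print(os.path.getsize(path) == 172976478)
print(sha256.hexdigest() == "68f18a2904840e82d003ee7132cbb5de54147c6dbbca1b50f64a8d7051c3b898")
```
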
tokenizer_config.json ADDED
{
  "langs": [
    "el",
    "en"
  ],
  "model_max_length": 1024,
  "do_lower_case": true
}
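These settings are read by FSMTTokenizer when the model folder is loaded. A minimal sketch (the folder path is again a placeholder):

```python
# A minimal sketch: tokenizer_config.json values are picked up at load time.
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("<your_downloaded_model_folderpath_here>")
print(tokenizer.model_max_length)  # 1024
print(tokenizer.do_lower_case)     # True -- consistent with the lower-case model
```
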
vocab-src.json ADDED
The diff for this file is too large to render.
 
vocab-tgt.json ADDED
The diff for this file is too large to render.