---
language: multilingual
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- multi_nli
- xnli
license: mit
pipeline_tag: zero-shot-classification
widget:
- text: "За кого вы голосуете в 2020 году?"
  labels: "politique étrangère, Europe, élections, affaires, politique"
- text: "لمن تصوت في 2020؟"
  labels: "السياسة الخارجية, أوروبا, الانتخابات, الأعمال, السياسة"
- text: "2020'de kime oy vereceksiniz?"
  labels: "dış politika, Avrupa, seçimler, ticaret, siyaset"
---

# xlm-roberta-large-xnli

## Model Description

This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).

## Intended Usage

This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:

- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu

Since the base model was pre-trained on 100 different languages, the model has shown some effectiveness in languages beyond those listed above as well. See the full list of pre-trained languages in appendix A of the [XLM-RoBERTa paper](https://arxiv.org/abs/1911.02116).

For English-only classification, it is recommended to use [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).

#### With the zero-shot classification pipeline

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")
```

You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to classify in another:

```python
# we will classify the Russian translation of "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
#  'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
#  'sequence': 'За кого вы голосуете в 2020 году?'}
```

The default hypothesis template is the English sentence `This example is {}.` If you are working strictly within one language, it may be worthwhile to translate this to the language you are working with:

```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
#  'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
#  'sequence': '¿A quién vas a votar en 2020?'}
```

#### With manual PyTorch

```python
# pose the sequence as an NLI premise and the label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')

sequence = "За кого вы голосуете в 2020 году?"  # text to classify
label = "politics"  # candidate label to score

premise = sequence
hypothesis = f'This example is {label}.'

# run through the model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
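
To score several candidate labels this way, you can loop over the labels and rank them by their entailment probabilities. The helper below is only a sketch and is not part of the original card: the `rank_labels` name is made up here, and it scores each label independently rather than normalizing across labels the way the pipeline does by default.

```python
# Hypothetical helper (not from the original card): rank candidate labels by
# their independent entailment probabilities, reusing the nli_model, tokenizer,
# and device defined in the snippet above.
def rank_labels(sequence, candidate_labels, hypothesis_template='This example is {}.'):
    scores = []
    with torch.no_grad():
        for label in candidate_labels:
            x = tokenizer.encode(sequence, hypothesis_template.format(label),
                                 return_tensors='pt',
                                 truncation_strategy='only_first')
            logits = nli_model(x.to(device))[0]
            entail_contradiction_logits = logits[:, [0, 2]]
            probs = entail_contradiction_logits.softmax(dim=1)
            scores.append(probs[:, 1].item())
    return sorted(zip(candidate_labels, scores), key=lambda pair: pair[1], reverse=True)

rank_labels("За кого вы голосуете в 2020 году?", ["Europe", "public health", "politics"])
```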

## Training

This model was pre-trained on a set of 100 languages, as described in [the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the NLI task using the concatenation of the MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on XNLI data only, in which the translations of the premise and hypothesis are shuffled such that the premise and hypothesis for each example come from the same original English example but are in different languages.
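
As an illustration of that last shuffling step, here is a minimal sketch using the Hugging Face `datasets` library. It is not the script used to train this model; it assumes the `all_languages` configuration of the XNLI dataset, whose examples carry parallel translations of each premise and hypothesis.

```python
# Minimal sketch of the cross-lingual shuffling described above (an illustration,
# not the actual training code for this model).
import random
from datasets import load_dataset

xnli = load_dataset("xnli", "all_languages", split="validation")

shuffled = []
for ex in xnli:
    # ex["premise"] maps language code -> premise text;
    # ex["hypothesis"] holds parallel "language" and "translation" lists
    hypotheses = dict(zip(ex["hypothesis"]["language"], ex["hypothesis"]["translation"]))
    premise_lang, hypothesis_lang = random.sample(list(ex["premise"].keys()), 2)
    shuffled.append({
        "premise": ex["premise"][premise_lang],      # e.g. a Russian premise
        "hypothesis": hypotheses[hypothesis_lang],   # e.g. a Greek hypothesis
        "label": ex["label"],                        # the NLI label is language-independent
    })
```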