ixa-ehu committed commit 348bbe2 (parent: 435520c)

roberta-eus-euscrawl-large-cased model upload

Files changed:
- README.md +41 -3
- config.json +27 -0
- pytorch_model.bin +3 -0
- sentencepiece.bpe.model +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
README.md
CHANGED
@@ -1,3 +1,41 @@

---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---

# Roberta-eus Euscrawl large cased

This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, trained on different corpora:

- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on EusCrawl, a corpus created by tailored crawling of Basque sites and distributed under a CC-BY license. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of the mC4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on the Basque portion of the CC100 dataset.

The models have been tested on five downstream tasks for Basque: topic classification, sentiment analysis, stance detection, named entity recognition (NER), and question answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). A summary of the results:

| Model                            | Topic class. | Sentiment | Stance det. | NER      | QA       | Average  |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased  | 76.2         | 77.7      | 57.4        | 86.8     | 34.6     | 66.5     |
| roberta-eus-euscrawl-large-cased | **77.6**     | 78.8      | 62.9        | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased       | 75.3         | **80.4**  | 59.1        | 86.0     | 35.2     | 67.2     |
| roberta-eus-CC100-base-cased     | 76.2         | 78.8      | **63.4**    | 85.2     | 35.8     | 67.9     |
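The Average column is the unweighted mean of the five task scores, rounded to one decimal. A quick sanity check of the table in Python:

```python
# Per-task scores from the table above: topic, sentiment, stance, NER, QA.
scores = {
    "roberta-eus-euscrawl-base-cased":  [76.2, 77.7, 57.4, 86.8, 34.6],
    "roberta-eus-euscrawl-large-cased": [77.6, 78.8, 62.9, 87.2, 38.3],
    "roberta-eus-mC4-base-cased":       [75.3, 80.4, 59.1, 86.0, 35.2],
    "roberta-eus-CC100-base-cased":     [76.2, 78.8, 63.4, 85.2, 35.8],
}

# Recompute the Average column as the unweighted mean of the five tasks.
averages = {name: round(sum(vals) / len(vals), 1) for name, vals in scores.items()}
```

The recomputed means match the Average column, including the best score of 69.0 for the large model.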

If you use any of these models, please cite the following paper:

```bibtex
@misc{artetxe2022euscrawl,
  title={Does corpus quality really matter for low-resource languages?},
  author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
          Olatz Perez-de-Viñaspre and Aitor Soroa},
  year={2022},
  eprint={2203.08111},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
config.json
ADDED
@@ -0,0 +1,27 @@

```json
{
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "tokenizer_class": "XLMRobertaTokenizer",
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.15.0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50005
}
```
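The config describes a RoBERTa-large architecture (24 layers, hidden size 1024, 16 heads). As a rough cross-check, the parameter count implied by these numbers can be estimated and compared against the float32 checkpoint size of 1,420,741,035 bytes (about 355M parameters at 4 bytes each). This is an approximation that ignores the LM head and the embedding LayerNorm:

```python
# Key figures from config.json above.
H, L, I = 1024, 24, 4096          # hidden_size, num_hidden_layers, intermediate_size
V, P, T = 50005, 514, 1           # vocab_size, max_position_embeddings, type_vocab_size

embeddings = (V + P + T) * H      # token, position, and token-type embeddings
attention = 4 * (H * H + H)       # q/k/v/output projections, weights + biases
ffn = (H * I + I) + (I * H + H)   # feed-forward up- and down-projection
norms = 2 * 2 * H                 # two LayerNorms (weight + bias) per layer
per_layer = attention + ffn + norms

estimate = embeddings + L * per_layer            # ~354M parameters
checkpoint_params = 1_420_741_035 / 4            # pytorch_model.bin size / 4 bytes per float32
```

The estimate lands within about 0.3% of the checkpoint size, consistent with a float32 RoBERTa-large with a 50,005-token vocabulary.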
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@

```
version https://git-lfs.github.com/spec/v1
oid sha256:ec448e7b727cf46f21c734fcf4e3b56898cf1b20c5017e7f322a8d014ba4acdc
size 1420741035
```
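pytorch_model.bin is stored via git-LFS: the repository holds only the three-line pointer above, while the actual weights live in LFS storage. A minimal sketch of reading such a pointer (the helper name is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a git-LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of pytorch_model.bin from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:ec448e7b727cf46f21c734fcf4e3b56898cf1b20c5017e7f322a8d014ba4acdc\n"
    "size 1420741035\n"
)
info = parse_lfs_pointer(pointer)
```

The `oid` is the SHA-256 of the real file, which LFS clients use to fetch and verify the ~1.4 GB checkpoint.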
sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@

```
version https://git-lfs.github.com/spec/v1
oid sha256:8e160df24c5c69726dfc7099db8e610307ce0ab35d945070b59c3548c065f75d
size 1169424
```
special_tokens_map.json
ADDED
@@ -0,0 +1 @@

```json
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
```
tokenizer_config.json
ADDED
@@ -0,0 +1 @@

```json
{"do_lower_case": false, "bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "sp_model_kwargs": {}, "tokenizer_class": "XLMRobertaTokenizer"}
```
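Note that the mask token in tokenizer_config.json sets `"lstrip": true`, so `<mask>` absorbs the space before it, the usual RoBERTa convention for fill-mask input. A quick check that the committed JSON parses and carries the expected flags:

```python
import json

# tokenizer_config.json contents from this commit, reproduced verbatim.
raw = (
    '{"do_lower_case":false, "bos_token": "<s>", "eos_token": "</s>", '
    '"unk_token": "<unk>", "sep_token": "</s>", "cls_token": "<s>", '
    '"pad_token": "<pad>", "mask_token": {"content": "<mask>", '
    '"single_word": false, "lstrip": true, "rstrip": false, '
    '"normalized": true, "__type": "AddedToken"}, "sp_model_kwargs": {}, '
    '"tokenizer_class": "XLMRobertaTokenizer"}'
)
cfg = json.loads(raw)
```

The `tokenizer_class` of `XLMRobertaTokenizer` matches config.json, so the SentencePiece model above is loaded through the XLM-R tokenizer rather than RoBERTa's byte-level BPE.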