---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "textual entailment"
- "teca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/teca"
metrics:
- "accuracy"
model-index:
- name: roberta-base-ca-v2-cased-te
  results:
  - task:
      type: text-classification
    dataset:
      type: projecte-aina/teca
      name: TECA
    metrics:
      - name: Accuracy
        type: accuracy
        value: 0.8342
widget:
- text: "M'agrades. T'estimo."
- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."
- text: "El llibre va caure per la finestra. El llibre va sortir volant."
- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."
---

# Catalan BERTa-v2 (roberta-base-ca-v2) fine-tuned for Textual Entailment

**roberta-base-ca-v2-cased-te** is a Textual Entailment (TE) model for the Catalan language, fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for more details).
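
Like other checkpoints on the Hub, the model can be loaded through the `transformers` library. Below is a minimal usage sketch, not taken from the official scripts: the premise/hypothesis pair format and the label names are assumptions, so check the model's `config.json` for the exact label set.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the generic text-classification pipeline.
te = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")

# Premise and hypothesis are passed as a sentence pair (one of the widget examples).
premise = "El meu aniversari és el 23 de maig."  # "My birthday is on May 23."
hypothesis = "Faré anys a finals de maig."       # "I turn a year older at the end of May."

# Expected output shape: a label/score dict with an
# entailment/neutral/contradiction-style label (assumed label set).
print(te({"text": premise, "text_pair": hypothesis}))
```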

## Datasets
We used the TE dataset in Catalan called [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.
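
The dataset can be pulled directly from the Hub with the `datasets` library; this is a sketch, and the split and column names are assumptions to be checked against the TECA dataset card.

```python
from datasets import load_dataset

# Download TECA from the Hugging Face Hub.
teca = load_dataset("projecte-aina/teca")

print(teca)              # available splits and features
print(teca["train"][0])  # one example: a premise/hypothesis pair and its label
```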

## Evaluation and results
We evaluated the roberta-base-ca-v2-cased-te on the TECA test set against standard multilingual and monolingual baselines:

| Model        | TECA (Accuracy) | 
| ------------|:----|
| roberta-base-ca-v2-cased-te | **83.14** |
| BERTa       | 79.26 |
| mBERT       | 74.63 |
| XLM-RoBERTa | 33.30 |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
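
As a rough illustration of how such an accuracy figure could be reproduced, here is a hedged sketch using the `evaluate` library. The column names (`premise`, `hypothesis`, `label`) and the label mapping are assumptions; the scripts in the repository above are the authoritative reference.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Assumed split and column names; verify against the TECA dataset card.
teca = load_dataset("projecte-aina/teca", split="test")
te = pipeline("text-classification",
              model="projecte-aina/roberta-base-ca-v2-cased-te")

metric = evaluate.load("accuracy")
predictions = []
for example in teca:
    out = te({"text": example["premise"], "text_pair": example["hypothesis"]})
    pred = out[0] if isinstance(out, list) else out
    # Map the pipeline's string label back to the dataset's integer id.
    predictions.append(te.model.config.label2id[pred["label"]])

print(metric.compute(predictions=predictions, references=teca["label"]))
```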

## Citing 
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

### Funding
This work was funded by the [Catalan Government](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of the [AINA project](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).