---
task_categories:
- text-classification
- zero-shot-classification
- question-answering
- sentence-similarity
language:
- pt
- ca
- gl
- en
size_categories:
- 100K<n<1M
---

# Natural Portuguese Language Benchmark (Napolab)

<p align="center">
<img width="300" height="300" src="https://raw.githubusercontent.com/ruanchaves/napolab/main/images/ideogram_ai_logo.png">
</p>

[**Napolab**](https://huggingface.co/datasets/ruanchaves/napolab) is your go-to collection of Portuguese datasets with the following characteristics:

* 🌿 **Natural**: As much as possible, datasets consist of natural Portuguese text or professionally translated text.
* ✅ **Reliable**: Metrics correlate reliably with human judgments (accuracy, F1 score, Pearson correlation, etc.).
* 🌐 **Public**: Every dataset is available through a public link.
* 👩‍🔧 **Human**: Expert human annotations only. No automatic or unreliable annotations.
* 🎓 **General**: No domain-specific knowledge or advanced preparation is needed to solve dataset tasks.

[Napolab](https://huggingface.co/datasets/ruanchaves/napolab) currently includes the following datasets:

| | | |
| :---: | :---: | :---: |
| [assin](https://huggingface.co/datasets/assin) | [assin2](https://huggingface.co/datasets/assin2) | [rerelem](https://huggingface.co/datasets/ruanchaves/rerelem) |
| [hatebr](https://huggingface.co/datasets/ruanchaves/hatebr) | [reli-sa](https://huggingface.co/datasets/ruanchaves/reli-sa) | [faquad-nli](https://huggingface.co/datasets/ruanchaves/faquad-nli) |
| [porsimplessent](https://huggingface.co/datasets/ruanchaves/porsimplessent) | | |

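Every dataset in the table is hosted on the Hugging Face Hub, so it can be pulled directly with the `datasets` library. Below is a minimal sketch, assuming a standard `datasets` installation; split names and any `trust_remote_code` requirements depend on the installed library version, so verify them against each dataset card.

```python
from datasets import load_dataset

# Load two of the benchmark datasets by the Hub identifiers listed in the
# table above. Older, script-based datasets may additionally require
# trust_remote_code=True depending on your installed `datasets` version.
assin2 = load_dataset("assin2")
hatebr = load_dataset("ruanchaves/hatebr")

print(assin2)              # shows the available splits and columns
print(hatebr["train"][0])  # inspect a single annotated example
```
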
**💡 Contribute**: We're open to expanding Napolab! Suggest additions by opening an issue. For more information, read our [CONTRIBUTING.md](CONTRIBUTING.md).

🌍 For broader accessibility, all datasets have translations in **Catalan, English, Galician and Spanish**, produced with the `facebook/nllb-200-1.3B` model via [Easy-Translate](https://github.com/ikergarcia1996/Easy-Translate).

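The official translations were produced with Easy-Translate, as noted above. Purely as an illustration, the same `facebook/nllb-200-1.3B` checkpoint can also be driven through the generic `transformers` translation pipeline; the FLORES-200 language codes and the example sentence below are assumptions for this sketch, not part of the original translation setup.

```python
from transformers import pipeline

# Illustrative only: the Napolab translations were generated with Easy-Translate.
# This drives the same NLLB checkpoint through the standard transformers
# translation pipeline, using FLORES-200 language codes.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-1.3B",
    src_lang="por_Latn",  # Portuguese source
    tgt_lang="spa_Latn",  # Spanish target; cat_Latn, glg_Latn or eng_Latn for the others
)

result = translator("A premissa implica a hipótese.", max_length=128)
print(result[0]["translation_text"])
```
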
## 📊 Napolab for Experiments with Large Language Models (LLMs)

A new format of Napolab, designed specifically for researchers experimenting with Large Language Models (LLMs), is now available. It includes two main fields:

* **Prompt**: The input prompt to be fed into the LLM.
* **Answer**: The expected classification output label from the LLM, which is always a number between 0 and 5.

The dataset in this format can be accessed at [https://huggingface.co/datasets/ruanchaves/napolab](https://huggingface.co/datasets/ruanchaves/napolab). If you've evaluated recent LLMs on this benchmark, please let us know! We'd love to hear about it.

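A minimal sketch of loading this LLM-oriented format with the `datasets` library follows. The split layout and the exact column names (`prompt`, `answer`) are assumptions based on the description above; check the dataset viewer for the actual schema.

```python
from datasets import load_dataset

# Load the LLM-oriented version of Napolab. The split layout and the
# column names ("prompt"/"answer") are assumed from the description above;
# verify them in the dataset viewer before relying on this.
napolab = load_dataset("ruanchaves/napolab")
print(napolab)  # shows the available splits and columns

first_split = next(iter(napolab.values()))
example = first_split[0]
print(example["prompt"][:200])                # input prompt for the LLM
print("expected answer:", example["answer"])  # integer label between 0 and 5
```
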
## Leaderboards

The [Open PT LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) incorporates datasets from Napolab.

## 🤖 Models

We've made several models fine-tuned on this benchmark available on the Hugging Face Hub:

| Datasets | mDeBERTa v3 | BERT Large | BERT Base |
| :---: | :---: | :---: | :---: |
| **ASSIN 2 - STS** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-assin2-similarity) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-assin2-similarity) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-assin2-similarity) |
| **ASSIN 2 - RTE** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-assin2-entailment) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-assin2-entailment) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-assin2-entailment) |
| **ASSIN - STS** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-assin-similarity) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-assin-similarity) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-assin-similarity) |
| **ASSIN - RTE** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-assin-entailment) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-assin-entailment) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-assin-entailment) |
| **HateBR** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-hatebr) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-hatebr) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-hatebr) |
| **FaQUaD-NLI** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-faquad-nli) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-faquad-nli) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-faquad-nli) |
| **PorSimplesSent** | [Link](https://huggingface.co/ruanchaves/mdeberta-v3-base-porsimplessent) | [Link](https://huggingface.co/ruanchaves/bert-large-portuguese-cased-porsimplessent) | [Link](https://huggingface.co/ruanchaves/bert-base-portuguese-cased-porsimplessent) |

For model fine-tuning details and benchmark results, visit [EVALUATION.md](EVALUATION.md).

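As a quick way to try these checkpoints, the sketch below runs the HateBR and ASSIN 2 RTE models through the standard `transformers` text-classification pipeline. The example sentences are made up and the returned label names depend on each model's configuration; treat this as an illustrative sketch rather than the evaluation setup documented in [EVALUATION.md](EVALUATION.md).

```python
from transformers import pipeline

# Single-sentence task: HateBR offensive-comment detection.
# Label names come from the model's config and may be generic (LABEL_0/LABEL_1).
hatebr = pipeline(
    "text-classification",
    model="ruanchaves/bert-base-portuguese-cased-hatebr",
)
print(hatebr("Que comentário desnecessário e agressivo."))

# Sentence-pair task: ASSIN 2 RTE. The text/text_pair dict is the standard
# way to pass sentence pairs to a transformers text-classification pipeline.
rte = pipeline(
    "text-classification",
    model="ruanchaves/mdeberta-v3-base-assin2-entailment",
)
print(rte({"text": "O menino está jogando futebol no parque.",
           "text_pair": "Uma criança está praticando um esporte."}))
```
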
## Citation

Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon. In the meantime, if you would like to cite our work or models before the publication of the paper, please use the following BibTeX citation for this repository:

```bibtex
@software{Chaves_Rodrigues_napolab_2023,
  author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
  doi = {10.5281/zenodo.7781848},
  month = {3},
  title = {{Natural Portuguese Language Benchmark (Napolab)}},
  url = {https://github.com/ruanchaves/napolab},
  version = {1.0.0},
  year = {2023}
}
```

## Disclaimer

The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the HateBR dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).