ruanchaves committed · Commit 3817275 · Parent: cceba9b · Update README.md

[**Napolab**](https://github.com/ruanchaves/napolab) is your go-to collection of Portuguese datasets for the evaluation of Large Language Models.

## 📊 Napolab for Large Language Models (LLMs)

A format of Napolab specifically designed for researchers experimenting with Large Language Models (LLMs) is now available. This format includes two main fields:

The dataset in this format can be accessed at [https://huggingface.co/datasets/ruanchaves/napolab](https://huggingface.co/datasets/ruanchaves/napolab). If you’ve used Napolab for LLM evaluations, please share your findings with us!
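
Since this excerpt does not show the two field names, here is a minimal, hypothetical sketch of how a two-field completions record is typically scored; the names `prompt` and `answer` are illustrative assumptions, not taken from the dataset card:

```python
# Hypothetical sketch of exact-match scoring against a completions-style
# record. The field names "prompt" and "answer" are illustrative assumptions.
def exact_match(record: dict, completion: str) -> bool:
    """True when the model completion equals the reference answer,
    ignoring case and surrounding whitespace."""
    return completion.strip().casefold() == record["answer"].strip().casefold()

example = {
    "prompt": "Premissa: A menina sorriu. Hipótese: A menina está feliz. Resposta:",
    "answer": "Sim",
}
print(exact_match(example, " sim "))  # True
```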

## Leaderboards

The [Open PT LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) incorporates datasets from Napolab.

## Guidelines

Napolab adopts the following guidelines for the inclusion of datasets:

🌍 For broader accessibility, all datasets have translations in **Catalan, English, Galician and Spanish**, produced with the `facebook/nllb-200-1.3B` model via [Easy-Translate](https://github.com/ikergarcia1996/Easy-Translate).
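
As a convenience, the languages above map to the following codes, which NLLB-family models such as `facebook/nllb-200-1.3B` expect (assuming the standard FLORES-200 code set):

```python
# FLORES-200 codes used by NLLB models for the source and target languages
# mentioned above.
NLLB_CODES = {
    "Portuguese": "por_Latn",  # source language of the original datasets
    "Catalan": "cat_Latn",
    "English": "eng_Latn",
    "Galician": "glg_Latn",
    "Spanish": "spa_Latn",
}
print(NLLB_CODES["Spanish"])  # spa_Latn
```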
## 🤖 Models

We've made several models, fine-tuned on this benchmark, available on Hugging Face Hub:

For model fine-tuning details and benchmark results, visit [EVALUATION.md](EVALUATION.md).

## Usage

To reproduce the Napolab benchmark available on the Hugging Face Hub locally, follow these steps:

1. Clone the repository and install the library:

```bash
git clone https://github.com/ruanchaves/napolab.git
cd napolab
pip install -e .
```

2. Generate the benchmark file:

```python
from napolab import export_napolab_benchmark, convert_to_completions_format

# Export the full benchmark and convert it to the completions format
input_df = export_napolab_benchmark()
output_df = convert_to_completions_format(input_df)
output_df.reset_index().to_csv("test.csv", index=False)
```

## Citation

Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon. In the meantime, if you would like to cite our work or models before the publication of the paper, please use the following BibTeX citation for this repository: