---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- pt
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
  - text: "<|startofinstruction|>Você pode me explicar o que é Aprendizagem de Máquina?<|endofinstruction|>"
    example_title: Aprendizagem de Máquina
  - text: "<|startofinstruction|>Você sabe alguma coisa sobre Ética das Virtudes?<|endofinstruction|>"
    example_title: Ética
  - text: "<|startofinstruction|>Como eu posso fazer a minha namorada feliz?<|endofinstruction|>"
    example_title: Conselho
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_new_tokens: 200
    length_penalty: 0.3
    early_stopping: true
co2_eq_emissions:
  emissions: 0.35
  source: CodeCarbon
  training_type: fine-tuning
  geographical_location: Singapore
  hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-portuguese-124M
`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-portuguese-124M` is an instruction-tuned model based on [GPT-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese). The model was trained on a dataset of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).
## Details
- **Size:** 124,441,344 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 5
- **Batch size:** 24
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8), sketched below
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.35 kg CO2 (Singapore)
- **Total Energy Consumption:** 0.73 kWh
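
The list above can be read as a standard `transformers` fine-tuning configuration. The sketch below is only an illustration of how those hyperparameters would typically be wired up; the scheduler choice and the `steps_per_epoch` value are assumptions, not the repository's actual training script:

```python
# Illustrative sketch only: maps the hyperparameters listed above onto a
# standard PyTorch/transformers setup. The scheduler type and steps_per_epoch
# are assumptions; see the linked repository for the actual training code.
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("pierreguillou/gpt2-small-portuguese")

num_epochs = 5          # Number of Epochs
batch_size = 24         # Batch size
steps_per_epoch = 1000  # placeholder; depends on the dataset size and batch size

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,  # warmup_steps = 1e2
    num_training_steps=num_epochs * steps_per_epoch,
)
```
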
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`O que é um modelo de linguagem?`<|endofinstruction|>`Um modelo de linguagem é uma distribuição de probabilidade sobre um vocabulário.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the model, then move the model to the available device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens shown above (bos_token and sep_token)
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
                   add_special_tokens=False,
                   return_tensors="pt").to(device)

responses = aira.generate(**inputs, num_return_sequences=2)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>> Question: 👤 Qual a capital do Brasil?
>>> Response 1: 🤖 A capital do Brasil é Brasília.
>>> Response 2: 🤖 A capital do Brasil é Brasília.
```
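
The inference widget in this card's metadata uses specific sampling settings (temperature, top_k, top_p, repetition_penalty, max_new_tokens, length_penalty, early_stopping). As an illustration, the same values can be passed directly to `generate`; they are the widget defaults, not the only sensible choices:

```python
# Illustrative only: mirrors the sampling parameters from this card's metadata.
# Assumes `aira`, `tokenizer`, and `inputs` from the snippet above.
responses = aira.generate(
    **inputs,
    do_sample=True,          # enable sampling so temperature/top_k/top_p take effect
    temperature=0.2,
    top_k=30,
    top_p=0.3,
    repetition_penalty=1.2,
    max_new_tokens=200,
    length_penalty=0.3,      # mainly relevant for beam search
    early_stopping=True,     # mainly relevant for beam search
    num_return_sequences=2,
)
```
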
## Limitations
🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
## Evaluation
| Model (gpt2-portuguese) | Average | [ARC](https://arxiv.org/abs/1803.05457) | [TruthfulQA](https://arxiv.org/abs/2109.07958) | [ToxiGen](https://arxiv.org/abs/2203.09509) |
|---------------------------------------------------------------------------------------|-----------|-----------------------------------------|------------------------------------------------|---------------------------------------------|
| [Aira-2-portuguese-124M](https://huggingface.co/nicholasKluge/Aira-2-portuguese-124M) | **34.73** | **24.87** | 40.60 | None |
| gpt2-small-portuguese | 31.96 | 22.48 | **41.44** | None |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). The ToxiGen evaluation was not performed because the task is not available in Portuguese. Thanks to [Laiviet](https://github.com/laiviet/lm-evaluation-harness) for translating some of the tasks in the LM-Evaluation-Harness.
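
For reference, the sketch below shows how a comparable evaluation is typically launched with a recent release of the harness. It is an assumption-laden example: the task names are the standard English ones, while the Portuguese scores above came from the translated tasks in the fork linked in the note.

```python
# Illustrative sketch, not the exact setup behind the table above.
# Requires a recent lm-evaluation-harness release (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/Aira-2-portuguese-124M",
    tasks=["arc_challenge", "truthfulqa_mc2"],  # English tasks; the card used translated variants
    batch_size=8,
)

print(results["results"])
```
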
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://huggingface.co/nicholasKluge/Aira-2-portuguese-124M},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
}
```
## License
`Aira-2-portuguese-124M` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.