|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- nicholasKluge/instruct-aira-dataset |
|
language: |
|
- pt |
|
metrics: |
|
- accuracy |
|
library_name: transformers |
|
tags: |
|
- alignment |
|
- instruction tuned |
|
- text generation |
|
- conversation |
|
- assistant |
|
pipeline_tag: text-generation |
|
widget: |
|
- text: "<|startofinstruction|>Olá! Como você se chama?<|endofinstruction|>" |
|
example_title: Olá |
|
- text: "<|startofinstruction|>Você pode me explicar o que é Aprendizagem de Máquina?<|endofinstruction|>" |
|
example_title: Aprendizagem de Máquina |
|
- text: "<|startofinstruction|>Você sabe alguma coisa sobre Ética das Virtudes?<|endofinstruction|>" |
|
example_title: Ética |
|
- text: "<|startofinstruction|>Como eu posso fazer a minha namorada feliz?<|endofinstruction|>" |
|
example_title: Conselho |
|
inference: |
|
parameters: |
|
repetition_penalty: 1.2 |
|
temperature: 0.2 |
|
top_k: 30 |
|
top_p: 0.3 |
|
max_length: 200 |
|
length_penalty: 0.3 |
|
early_stopping: true |
|
co2_eq_emissions: |
|
emissions: 0.35 |
|
source: CodeCarbon |
|
training_type: fine-tuning |
|
geographical_location: Singapore |
|
hardware_used: NVIDIA A100-SXM4-40GB |
|
--- |
|
# Aira-2-portuguese-124M |
|
|
|
`Aira-2-portuguese-124M` is the second version of the Aira instruction-tuned series. Aira is an instruction-tuned GPT-style model based on [GPT-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese). The model was trained on a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
|
|
|
Check out our Gradio demo on [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo-Portuguese).
|
|
|
## Details |
|
|
|
- **Size:** 124,441,344 parameters |
|
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset) |
|
- **Language:** Portuguese |
|
- **Number of Epochs:** 5 |
|
- **Batch size:** 24 |
|
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8); see the sketch after this list
|
- **GPU:** 1 NVIDIA A100-SXM4-40GB |
|
- **Emissions:** 0.35 kg CO2eq (Singapore)
|
- **Total Energy Consumption:** 0.73 kWh |
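
For reference, the optimizer and schedule listed above correspond to a setup like the following (a minimal sketch, assuming a tokenized `train_dataset` built from the Instruct-Aira dataset; the actual training loop is in the notebook linked below):

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')

# Hyperparameters from the list above.
num_epochs = 5
train_dataloader = DataLoader(train_dataset, batch_size=24, shuffle=True)  # train_dataset is assumed

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)

# Linear warmup over the first 100 steps, then linear decay to zero.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=len(train_dataloader) * num_epochs,
)
```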
|
|
|
This repository contains the [notebook](AIRA_FineTuning.ipynb) used to train this model.
|
|
|
## Usage |
|
|
|
Three special tokens are used to mark the user side of the interaction and the model's response: |
|
|
|
`<|startofinstruction|>`O que é um modelo de linguagem?`<|endofinstruction|>`Um modelo de linguagem é uma distribuição de probabilidade sobre um vocabulário.`<|endofcompletion|>` |
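
You can verify how these markers map onto the tokenizer's registered special tokens (a quick sanity check; the exact mapping comes from the tokenizer config):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')

# Lists which strings are registered as bos, eos, pad, and any extra tokens.
print(tokenizer.special_tokens_map)
```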
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-portuguese-124M')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question with the instruction markers before tokenizing.
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.eos_token, return_tensors="pt").to(device)

responses = aira.generate(
    **inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=200,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2,
)

print(f"Question: 👤 {question}\n")

# Decode each completion, dropping special tokens and the echoed question.
for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
|
|
|
The model will output something like: |
|
|
|
```markdown |
|
>>> Question: 👤 Qual a capital do Brasil? |
|
|
|
>>> Response 1: 🤖 A capital do Brasil é Brasília.
>>> Response 2: 🤖 A capital do Brasil é Brasília.
|
``` |
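
For quick experiments, the higher-level `pipeline` API also works (a minimal sketch; note that the instruction template still has to be applied by hand):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="nicholasKluge/Aira-2-portuguese-124M")

# The instruction markers must be added manually, as in the widget examples above.
prompt = "<|startofinstruction|>Qual a capital do Brasil?<|endofinstruction|>"

outputs = generator(prompt, max_length=200, do_sample=True, top_k=50, top_p=0.95, temperature=0.7)
print(outputs[0]["generated_text"])
```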
|
|
|
## Limitations |
|
|
|
🤥 Generative models can produce pseudo-informative content, that is, false information that may appear truthful.
|
|
|
🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes. |
|
|
|
## Cite as 🤗 |
|
|
|
```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-Instruct-PT-124M},
  author = {Nicholas Kluge Corrêa and Carolina Del Pino},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```
|
|
|
## License |
|
|
|
`Aira-2-portuguese-124M` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
|