---
license: apache-2.0
datasets:
  - nicholasKluge/instruct-aira-dataset
language:
  - en
metrics:
  - accuracy
library_name: transformers
tags:
  - alignment
  - instruction tuned
  - text generation
  - conversation
  - assistant
pipeline_tag: text-generation
widget:
  - text: <|startofinstruction|>How should I call you?<|endofinstruction|>
    example_title: Greetings
  - text: >-
      <|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>
    example_title: Machine Learning
  - text: >-
      <|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>
    example_title: Ethics
  - text: >-
      <|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>
    example_title: Advice
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 30
    top_p: 0.3
    max_length: 200
    length_penalty: 0.3
    early_stopping: true
co2_eq_emissions:
  emissions: 1.78
  source: CodeCarbon
  training_type: fine-tuning
  geographical_location: United States of America
  hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-1B1

Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-1B1 is an instruction-tuned GPT-style model based on TinyLlama-1.1B. The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).

Check our Gradio demo in Spaces.
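The fine-tuning data is openly available on the Hugging Face Hub. Below is a minimal sketch of inspecting it; the split and column names are not specified in this card, so check the dataset card for the exact schema.

```python
from datasets import load_dataset

# Dataset used to fine-tune Aira-2; inspect the splits and columns before use.
dataset = load_dataset("nicholasKluge/instruct-aira-dataset")
print(dataset)  # prints the available splits and their column names
```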
## Details

- Size: 1,261,545,472 parameters
- Dataset: Instruct-Aira Dataset
- Language: English
- Number of Epochs: 3
- Batch size: 4
- Optimizer: `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8); a sketch of this setup follows below
- GPU: 1 NVIDIA A100-SXM4-40GB
- Emissions: 1.78 KgCO2 (Singapore)
- Total Energy Consumption: 3.64 kWh
This repository has the notebook used to train this model.
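For reference, here is a minimal sketch of the optimizer configuration listed above. The linear warmup schedule and the total step count are assumptions for illustration; the training notebook is the authoritative source.

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

# Base checkpoint (revision taken from the evaluation table below;
# the "TinyLlama/" org prefix is an assumption).
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T"
)

# Hyperparameters from the Details list.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)

# warmup_steps = 1e2; the schedule shape and total steps are placeholders.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=10_000,  # hypothetical; depends on dataset size, epochs, batch size
)
```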
## Usage

Three special tokens are used to mark the user side of the interaction and the model's response:

```
<|startofinstruction|>What is a language model?<|endofinstruction|>A language model is a probability distribution over a vocabulary.<|endofcompletion|>
```
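These markers correspond to the tokenizer's special tokens, which is how the example below assembles its prompts. A quick way to check the mapping (the exact correspondence is an assumption inferred from the template and the code):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/Aira-2-1B1")

# Expected (assumed) mapping:
#   bos_token -> <|startofinstruction|>
#   sep_token -> <|endofinstruction|>
#   eos_token -> <|endofcompletion|>
print(tokenizer.bos_token, tokenizer.sep_token, tokenizer.eos_token)
```

The full usage example: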
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-1B1')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-1B1')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question with the model's special tokens:
# <|startofinstruction|> ... <|endofinstruction|>
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token, return_tensors="pt").to(device)

responses = aira.generate(
    **inputs,
    bos_token_id=tokenizer.bos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    max_length=500,
    top_p=0.95,
    temperature=0.7,
    num_return_sequences=2,
)

print(f"Question: 👤 {question}\n")

for i, response in enumerate(responses):
    # Decode and strip the echoed question so only the completion remains.
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:

```
>>> Question: 👤 What is the capital of Brazil?

>>> Response 1: 🤖 The capital of Brazil is Brasília.
>>> Response 2: 🤖 The capital of Brazil is Brasília.
```
## Limitations

- 🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
- 🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
## Evaluation

| Model (TinyLlama) | Average | ARC | TruthfulQA | ToxiGen |
|---|---|---|---|---|
| Aira-2-1B1 | 42.55 | 25.26 | 50.81 | 51.59 |
| TinyLlama-1.1B-intermediate-step-480k-1T | 37.52 | 30.89 | 39.55 | 42.13 |

- Evaluations were performed using the Language Model Evaluation Harness (by EleutherAI). The notebook used to make these evaluations is available in this repository.
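For readers who want to reproduce numbers like these, here is a rough sketch using the harness's Python API. The model type, task name, and few-shot count are assumptions that vary across harness versions; the evaluation notebook mentioned above is the authoritative source.

```python
# pip install lm-eval
from lm_eval import evaluator

# Sketch only: the "hf" model type and "arc_challenge" task name follow
# recent harness versions and may differ from the setup used for this card.
results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=nicholasKluge/Aira-2-1B1",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```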
## Cite as 🤗

```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-2-1B1},
  author = {Nicholas Kluge Corrêa},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```
## License

Aira-2-1B1 is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 25.19 |
| ARC (25-shot) | 23.21 |
| HellaSwag (10-shot) | 26.97 |
| MMLU (5-shot) | 24.86 |
| TruthfulQA (0-shot) | 50.63 |
| Winogrande (5-shot) | 50.28 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.39 |