
This model is a Portuguese fine-tuned version of facebook/opt-125m. It underwent additional causal language modeling pre-training with a context size of 512 tokens, using an extra 300 million Portuguese tokens sampled from mC4. The Wandb report is publicly available here. The code for training on Colab Pro (A100, 40 GB) can be found here. The total cost of training this model was R$17.40, or about $3.37 USD (as of March 2023).
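The training recipe itself is not reproduced on this card (the linked Colab notebook is the authoritative reference), but a minimal sketch of the setup described above, continued causal-LM pre-training of facebook/opt-125m on the Portuguese split of mC4 at a 512-token context, could look like this. The hyperparameters below are illustrative assumptions, not the values actually used:

```python
# Sketch only: batch size, learning rate, and step count are assumptions,
# not the settings used to train thiagolaitz/opt-125m-pt-finetuned.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Stream the Portuguese split of mC4 and tokenize to the 512-token context size.
dataset = load_dataset("mc4", "pt", split="train", streaming=True)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text", "timestamp", "url"])

# mlm=False gives standard causal language modeling (labels = shifted inputs).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="opt-125m-pt-finetuned",
    per_device_train_batch_size=8,  # assumption
    learning_rate=5e-5,             # assumption
    max_steps=10_000,               # assumption; required with a streaming dataset
    fp16=True,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```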

Deterministic use:

```python
from transformers import pipeline

# do_sample defaults to False, so the pipeline uses greedy decoding
# and the output is deterministic.
generator = pipeline("text-generation", model="thiagolaitz/opt-125m-pt-finetuned", max_length=30)
generator("Eles brincaram o dia inteiro sob o sol quente, mas")
# Output: Eles brincaram o dia inteiro sob o sol quente, mas não se deixaram levar pelo sol.
```

Top-k sampling:

```python
from transformers import pipeline

# do_sample=True enables sampling; transformers defaults to top-k sampling
# with k=50, so the output varies from run to run.
generator = pipeline("text-generation", model="thiagolaitz/opt-125m-pt-finetuned", do_sample=True, max_length=30)
generator("Eles brincaram o dia inteiro sob o sol quente, mas")
```
