
# t5-reddit-2014

A T5-small model fine-tuned on a Reddit "one-ups" / "clapbacks" dataset: each reply in the fine-tuning data has a vote score at least 1.5x that of its parent comment.

From a few tests it seems to have adopted a snarky tone. A common reply is "I'm not a shit."
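
For clarity, the 1.5x score filter described above might look roughly like the sketch below. The field names and `keep_pair` helper are hypothetical; the actual dataset construction lives in the repo linked in the next section.

```python
# Hypothetical sketch of the 1.5x vote-score filter; field names are
# illustrative, not the dataset's actual schema.
candidate_pairs = [
    ({"body": "Looks like a potato bug", "score": 40},
     {"body": "I'm not a shit.", "score": 75}),
    ({"body": "Nice try", "score": 100},
     {"body": "Thanks", "score": 20}),
]

def keep_pair(parent: dict, reply: dict) -> bool:
    """Keep a (parent, reply) pair when the reply out-scores the parent by at least 1.5x."""
    return reply["score"] >= 1.5 * parent["score"]

filtered = [(p, r) for p, r in candidate_pairs if keep_pair(p, r)]  # keeps only the first pair
```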

## Process

Training notebook: https://github.com/Georeactor/reddit-one-ups/blob/main/training-models/t5-seq2seq-2014.ipynb

- Started with t5-small so it could be trained on Colab.
- Fine-tuned on the first 80% of georeactor/reddit_one_ups_seq2seq_2014 for one epoch with batch size = 2.
- Loss did not move much during this epoch.
- Future experiments should use a larger model, a larger batch size (batch_size = 4 would easily have fit on Colab), and the full dataset if eval is not a concern. A rough sketch of the current setup follows below.
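
For orientation, here is a minimal sketch of the fine-tuning setup described in the list above, assuming a standard `Seq2SeqTrainer` loop and `parent` / `reply` text columns (both assumptions); the authoritative version is the linked training notebook.

```python
# Minimal sketch of the fine-tuning setup; column names and trainer
# arguments are assumptions, see the linked notebook for the real code.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# First 80% of the 2014 seq2seq dataset, as noted above.
dataset = load_dataset("georeactor/reddit_one_ups_seq2seq_2014", split="train[:80%]")

def tokenize(batch):
    # Parent comment is the input; the higher-scoring reply is the target.
    model_inputs = tokenizer(batch["parent"], truncation=True, max_length=256)
    labels = tokenizer(text_target=batch["reply"], truncation=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-reddit-2014",
        per_device_train_batch_size=2,  # batch size used in the original run
        num_train_epochs=1,             # one epoch, as above
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```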

## Inference

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained('georeactor/t5-reddit-2014')
tokenizer = AutoTokenizer.from_pretrained('georeactor/t5-reddit-2014')

# Encode the parent comment and generate a reply.
input_ids = tokenizer.encode('Looks like a potato bug', return_tensors="pt")
output = model.generate(input_ids, max_length=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
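
Greedy decoding may keep producing the same short replies; for more varied output, sampling parameters can be passed to `generate` (the settings below are illustrative, not from the original run):

```python
# Illustrative sampling settings; not part of the original model card.
sampled = model.generate(
    input_ids,
    max_length=256,
    do_sample=True,   # sample instead of greedy decoding
    top_p=0.9,        # nucleus sampling
    temperature=0.8,
)
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```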