---
language:
  - en
license: apache-2.0
tags:
  - reddit
datasets:
  - georeactor/reddit_one_ups_seq2seq_2014
---

# t5-reddit-2014

T5-small model fine-tuned on a Reddit "One-Ups" / "Clapbacks" dataset. Each reply in the fine-tuning data has a vote score at least 1.5x that of its parent comment.
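
The dataset card is the source of record for how pairs were selected; a minimal sketch of the score-ratio rule described above might look like the following, where the field names are assumptions, not the actual dataset schema:

```python
# Hypothetical sketch of the filtering rule: keep only (parent, reply)
# pairs where the reply out-scored the parent by at least 1.5x.
# The keys "parent_score" and "reply_score" are illustrative guesses.
def is_one_up(pair, ratio=1.5):
    return pair["reply_score"] >= ratio * pair["parent_score"]

pairs = [
    {"parent_score": 10, "reply_score": 40},  # kept (4.0x)
    {"parent_score": 20, "reply_score": 25},  # dropped (1.25x)
]
one_ups = [p for p in pairs if is_one_up(p)]
```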

From a few tests, it seems to have adopted a snarky tone. A common reply is "I'm not a shit."

## Process

Training notebook: https://github.com/Georeactor/reddit-one-ups/blob/main/training-models/t5-seq2seq-2014.ipynb

- Started with t5-small so it could run on Colab.
- Fine-tuned on the first 80% of georeactor/reddit_one_ups_seq2seq_2014 for one epoch with batch size = 2 (a sketch of this setup follows the list).
- Loss did not move much during this epoch.
- Future experiments should use a larger model, a larger batch size (batch_size = 4 would easily have fit on Colab), and the full dataset if we are not holding data out for eval.
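
The notebook linked above is the authoritative version of the training code; this is only a minimal sketch of the setup described in the list, assuming the dataset exposes `text` (parent) and `target` (reply) columns, which are guesses rather than the confirmed schema:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

# First 80% of the dataset, as described above.
train = load_dataset("georeactor/reddit_one_ups_seq2seq_2014",
                     split="train[:80%]")

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(batch):
    # Column names "text" and "target" are assumptions;
    # check the dataset card for the real schema.
    inputs = tokenizer(batch["text"], truncation=True, max_length=256)
    labels = tokenizer(batch["target"], truncation=True, max_length=256)
    inputs["labels"] = labels["input_ids"]
    return inputs

train = train.map(preprocess, batched=True,
                  remove_columns=train.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="t5-reddit-2014",
    per_device_train_batch_size=2,  # batch size used in the notebook
    num_train_epochs=1,             # one epoch, per the notes above
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```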

## Inference

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained('georeactor/t5-reddit-2014')
tokenizer = AutoTokenizer.from_pretrained('georeactor/t5-reddit-2014')

# Encode a parent comment and generate a reply.
# Renamed from `input` to avoid shadowing the Python builtin.
input_ids = tokenizer.encode('Looks like a potato bug', return_tensors="pt")
output = model.generate(input_ids, max_length=256)
# skip_special_tokens drops the <pad>/</s> markers from the reply.
tokenizer.decode(output[0], skip_special_tokens=True)
```
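
Greedy decoding tends to settle on the same few replies; as an illustrative tweak (not from the original notebook), sampling can produce more varied output:

```python
# Sampling parameters here are illustrative, not tuned values.
output = model.generate(input_ids, max_length=256,
                        do_sample=True, top_p=0.9, temperature=0.8)
tokenizer.decode(output[0], skip_special_tokens=True)
```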