

t5-small-negative-prompt-generator

This model is a fine-tuned version of t5-small on a subset of the AdamCodd/Civitai-8m-prompts dataset (~800K prompts), restricted to the top 10% of prompts by Civitai's positive engagement (the "stats" field in the dataset).
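For context, a subset like this could be built with the datasets library along the following lines; the split name and the internal layout of the "stats" field are assumptions, not details taken from this card:

from datasets import load_dataset

# Illustrative sketch: keep the top 10% of prompts by positive engagement.
ds = load_dataset("AdamCodd/Civitai-8m-prompts", split="train")  # split name assumed

def engagement(example):
    # Assumed keys inside the "stats" field; adjust to the real schema.
    stats = example["stats"]
    return stats.get("likeCount", 0) + stats.get("heartCount", 0)

# Find the 90th-percentile engagement score, then keep rows at or above it.
scores = sorted(engagement(row) for row in ds)
threshold = scores[int(0.9 * len(scores))]
top_ds = ds.filter(lambda row: engagement(row) >= threshold)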

It achieves the following results on the evaluation set:

  • Loss: 0.14079
  • Rouge1: 68.7527
  • Rouge2: 53.8612
  • RougeL: 67.3497
  • RougeLsum: 67.3552
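The card lists Evaluate 0.4.1 under Framework versions; ROUGE scores like the ones above can be computed with that library. A minimal illustration (the prediction and reference strings below are made up):

import evaluate

rouge = evaluate.load("rouge")
result = rouge.compute(
    predictions=["(worst quality, low quality:1.4), EasyNegative"],
    references=["(worst quality, low quality:1.4), bad anatomy, EasyNegative"],
)
print(result)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}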

The goal is to automatically generate negative prompts that improve the end result, based on the positive prompt given as input. This could be useful for displaying suggestions to new users of Stable Diffusion or similar models.

The license is cc-by-nc-4.0. For commercial use rights, please contact me (adamcoddml@gmail.com).

Usage

The length of the negative prompt can be adjusted with the max_new_tokens parameter. Both repetition_penalty and no_repeat_ngram_size are needed, as the model starts repeating itself very quickly without them. You can use temperature and top_k to make the outputs more varied (see the sampling example after the code below).

from transformers import pipeline

# Load the fine-tuned model through the text2text-generation pipeline
text2text_generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")

# The input is a positive prompt; the output is a suggested negative prompt
generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,           # maximum length of the negative prompt
    repetition_penalty=1.2,      # discourage repeated tokens
    no_repeat_ngram_size=2       # forbid repeated bigrams
)
print(generated_text)
# [{'generated_text': '(worst quality, low quality:1.4), EasyNegative'}]
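If you want more varied suggestions, enable sampling with the temperature and top_k parameters mentioned above. A minimal variant of the same call (the values below are illustrative, not recommendations from the author):

generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    do_sample=True,     # required for temperature/top_k to have an effect
    temperature=0.9,    # higher values -> more varied outputs
    top_k=50            # sample only from the 50 most likely tokens
)
print(generated_text)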

This model has been trained exclusively on Stable Diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL...), so it might not work as well with non-Stable-Diffusion models.

NB: The training dataset includes negative embeddings (such as EasyNegative), so they appear in the generated output, as in the example above.

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
  • Mixed precision
  • num_epochs: 2
  • weight_decay: 0.01
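The card does not state which training script was used; as a rough illustration, the hyperparameters above map onto the Transformers Seq2SeqTrainingArguments as follows (output_dir and fp16 as the mixed-precision backend are assumptions):

from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters above; not the author's script.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=2,
    weight_decay=0.01,
    fp16=True,  # "Mixed precision"; fp16 vs bf16 is an assumption
)
# AdamW with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults
# (adam_beta1, adam_beta2, adam_epsilon), so no extra arguments are needed.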

Framework versions

  • Transformers 4.36.2
  • Datasets 2.16.1
  • Tokenizers 0.15.0
  • Evaluate 0.4.1

If you want to support me, you can do so here.
