omersaidd / Prompt-Enhace-T5-base
This model was fine-tuned from google/t5-base on the gokaygokay/prompt-enhancer-dataset dataset.
Model description
This model was fine-tuned from google/t5-base for prompt generation: given a short prompt, it produces an enhanced, more detailed version.
Intended uses & limitations
More information needed
Training and evaluation data
The dataset used for training and evaluation is gokaygokay/prompt-enhancer-dataset.
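A quick way to inspect the dataset before use (a minimal sketch; the "train" split and the printed column names are assumptions, so check the dataset card for the actual layout):

from datasets import load_dataset

ds = load_dataset("gokaygokay/prompt-enhancer-dataset")
print(ds)              # available splits and column names
print(ds["train"][0])  # first example, assuming a "train" split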
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-6
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
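As an illustration only, the sketch below shows how these hyperparameters could map onto a standard Hugging Face Seq2SeqTrainer setup. The column names short_prompt and long_prompt and the output_dir are placeholders rather than values confirmed by this card, and the actual training script may have differed; the Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above are the Transformers optimizer defaults, so they need no explicit setting.

from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

raw = load_dataset("gokaygokay/prompt-enhancer-dataset")
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")

def preprocess(batch):
    # Prepend the task prefix used at inference time.
    # NOTE: "short_prompt" / "long_prompt" are placeholder column names.
    inputs = ["enhance prompt: " + p for p in batch["short_prompt"]]
    model_inputs = tokenizer(inputs, max_length=256, truncation=True)
    labels = tokenizer(text_target=batch["long_prompt"],
                       max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True,
                    remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="Prompt-Enhace-T5-base",  # placeholder output directory
    learning_rate=3e-6,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="cosine",          # cosine decay with 500 warmup steps
    warmup_steps=500,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()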
Framework versions
- Transformers 4.43.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
Example usage
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_checkpoint = "omersaidd/Prompt-Enhace-T5-base"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
device = 0 if torch.cuda.is_available() else -1  # use the GPU if one is available

enhancer = pipeline('text2text-generation',
                    model=model,
                    tokenizer=tokenizer,
                    repetition_penalty=1.2,
                    device=device)

max_target_length = 256
prefix = "enhance prompt: "  # task prefix the model was trained with
short_prompt = "beautiful house with text 'hello'"
answer = enhancer(prefix + short_prompt, max_length=max_target_length)
final_answer = answer[0]['generated_text']
print(final_answer)
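Here repetition_penalty=1.2 mildly penalizes tokens the model has already generated, which helps keep the enhanced prompt from looping on the same phrase, while max_length caps the output at 256 tokens.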
Model tree for omersaidd/Prompt-Enhace-T5-base
- Base model: google-t5/t5-base