T5 Base Art Generation Multi-Instruct OpenVINO
OpenVINO version of Mitchins/t5-base-artgen-multi-instruct for optimized inference on Intel hardware.
Model Details
- Base Model: T5-base (Google)
- Training Samples: 297,282
- Parameters: 222M
- Format: OpenVINO IR (FP32)
- Optimization: targeted at Intel CPU, integrated GPU, and VPU
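The FP32 IR shipped here was converted from the finetuned PyTorch checkpoint Mitchins/t5-base-artgen-multi-instruct. If you want to reproduce the conversion yourself, the snippet below is a minimal sketch assuming optimum-intel with the OpenVINO extra installed; the output directory name is arbitrary.
# Reproduce the FP32 OpenVINO IR from the PyTorch checkpoint (pip install "optimum[openvino]")
from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5Tokenizer
pt_model_id = "Mitchins/t5-base-artgen-multi-instruct"
ov_model = OVModelForSeq2SeqLM.from_pretrained(pt_model_id, export=True)  # converts to OpenVINO IR on the fly
tokenizer = T5Tokenizer.from_pretrained(pt_model_id)
ov_model.save_pretrained("t5-base-artgen-multi-instruct-openvino")  # writes the .xml/.bin IR files
tokenizer.save_pretrained("t5-base-artgen-multi-instruct-openvino")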
Quad-Instruction Capabilities
- Standard Enhancement:
Enhance this prompt: {text}
- Clean Enhancement:
Enhance this prompt (no lora): {text}
- Technical Enhancement:
Enhance this prompt (with lora): {text}
- Simplification:
Simplify this prompt: {text}
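The four instruction prefixes are plain text templates, so prompts can be built with ordinary string formatting. The helper below is only an illustrative sketch; the mode names and the build_prompt function are not part of the model's API.
# Illustrative helper mapping a short mode name to each instruction template
TEMPLATES = {
    "standard": "Enhance this prompt: {text}",
    "clean": "Enhance this prompt (no lora): {text}",
    "technical": "Enhance this prompt (with lora): {text}",
    "simplify": "Simplify this prompt: {text}",
}
def build_prompt(mode: str, text: str) -> str:
    return TEMPLATES[mode].format(text=text)
print(build_prompt("clean", "woman in red dress"))
# -> Enhance this prompt (no lora): woman in red dress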
Usage
from optimum.intel import OVModelForSeq2SeqLM
from transformers import T5Tokenizer
# Load OpenVINO model
model = OVModelForSeq2SeqLM.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-OpenVINO")
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct-OpenVINO")
# Example usage
text = "woman in red dress"
prompt = f"Enhance this prompt (no lora): {text}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
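The same pattern works for batches: pass a list of prompts, enable padding, and decode all outputs at once. The snippet below is a sketch that reuses the model and tokenizer loaded above; the input strings are just examples.
# Batched generation with padded inputs
prompts = [
    "Enhance this prompt (no lora): woman in red dress",
    "Simplify this prompt: a hyper-detailed portrait of a woman in a flowing crimson gown",
]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(**batch, max_length=80)
results = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for prompt, result in zip(prompts, results):
    print(prompt, "->", result)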
Performance
Optimized for Intel hardware (CPU, integrated GPU, VPU). On these devices, inference is typically faster than running the original PyTorch model; the exact speedup depends on the device and generation settings.
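Device selection and OpenVINO performance hints can be passed through optimum-intel. The sketch below assumes the current optimum-intel API (the ov_config argument of from_pretrained and .to() for device selection) and an available Intel GPU plugin; keep the default "CPU" device otherwise.
# Load with a latency-oriented hint and target an Intel integrated GPU
from optimum.intel import OVModelForSeq2SeqLM
model = OVModelForSeq2SeqLM.from_pretrained(
    "Mitchins/t5-base-artgen-multi-instruct-OpenVINO",
    ov_config={"PERFORMANCE_HINT": "LATENCY"},
)
model.to("GPU")  # device names follow OpenVINO conventions: "CPU", "GPU", "AUTO", ...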
Deployment
Perfect for Intel NUC and other Intel-based edge devices.
Model tree
- Base model: google-t5/t5-base
- Finetuned: Mitchins/t5-base-artgen-multi-instruct (this repository is its OpenVINO conversion)