Usage:
```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the processor and model from the Hugging Face Hub
processor = BlipProcessor.from_pretrained("prasanna2003/blip-image-captioning")
model = BlipForConditionalGeneration.from_pretrained("prasanna2003/blip-image-captioning")

# Make sure the tokenizer has an end-of-sequence token
if processor.tokenizer.eos_token is None:
    processor.tokenizer.eos_token = '<|eos|>'

# Load an image and build the instruction-style prompt
image = Image.open('file_name.jpg').convert('RGB')
prompt = """Instruction: Generate a single line caption of the Image.
output: """

# Preprocess, generate, and decode the caption
# (the decoded string includes the prompt and any special tokens)
inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=100)
print(processor.tokenizer.decode(output[0]))
```
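The snippet above runs on CPU even though `torch` is imported. A minimal sketch of GPU inference, continuing from the objects created above and assuming a CUDA device is available (the device check and `.to(device)` calls are standard PyTorch/transformers usage, not specific to this model):

```python
import torch

# Pick a device and move the model onto it (assumption: optional CUDA GPU)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# The processor output tensors must live on the same device as the model
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_length=100)
print(processor.tokenizer.decode(output[0]))
```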