# LoRA Adapter Model
This repository contains a LoRA adapter fine-tuned from the llava-hf/llava-1.5-7b-hf base model.
## Model Details
- Base Model: llava-hf/llava-1.5-7b-hf
- Training Parameters (see the configuration sketch below this list):
  - Learning Rate: 1e-4
  - Batch Size: 16
  - Training Steps: 58
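For reference, the snippet below is a minimal sketch of how these hyperparameters could map onto a PEFT `LoraConfig` and Transformers `TrainingArguments`. Only the learning rate, batch size, and step count come from this card; the LoRA rank, alpha, dropout, target modules, and precision setting are illustrative assumptions, not the settings actually used to train this adapter.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Assumed LoRA settings: rank, alpha, dropout, and target modules are NOT
# stated in this card and are shown here only as a typical configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Values taken from the card: learning rate, batch size, and training steps.
training_args = TrainingArguments(
    output_dir="lora-out",            # placeholder path
    learning_rate=1e-4,               # from the card
    per_device_train_batch_size=16,   # from the card
    max_steps=58,                     # from the card
    fp16=True,                        # assumption for a 7B model
)
```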
## Usage
```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import PeftModel
import torch

# Load the base model
base_model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    revision="a272c74",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the processor (handles both image preprocessing and tokenization)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf", revision="a272c74")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "Dipto084/RepLLaVA4",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
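Once the adapter is loaded, inference follows the standard LLaVA-1.5 pipeline. The snippet below is a minimal sketch: the image URL and prompt are placeholders, and `max_new_tokens` is an illustrative choice rather than a recommended setting.

```python
import requests
from PIL import Image

# Placeholder image and prompt, for illustration only
image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
prompt = "USER: <image>\nWhat is shown in this picture? ASSISTANT:"

# Preprocess the image and prompt, then move tensors to the model's device
inputs = processor(images=image, text=prompt, return_tensors="pt").to(base_model.device)

# Generate a response with the adapted model
output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```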