
# LoRA Adapter Model

This is a LoRA adapter fine-tuned from the `llava-hf/llava-1.5-7b-hf` base model.

## Model Details

- **Base Model:** `llava-hf/llava-1.5-7b-hf`
- **Training Parameters:**
  - Learning Rate: 1e-4
  - Batch Size: 16
  - Training Steps: 58
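
For reference, the sketch below shows how an adapter with these hyperparameters could be set up with `peft`. Only the learning rate, batch size, and step count come from this card; the LoRA rank, alpha, target modules, and dropout are illustrative assumptions, not documented values.

```python
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration, TrainingArguments
import torch

base_model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# LoRA settings: rank, alpha, target modules, and dropout are assumptions
lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,                    # assumed dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# Hyperparameters documented in this card
training_args = TrainingArguments(
    output_dir="repllava4-lora",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    max_steps=58,
    fp16=True,
)
```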

## Usage

```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import PeftModel
import torch

# Load the base model at the pinned revision the adapter was trained against
base_model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    revision="a272c74",
    torch_dtype=torch.float16,
    device_map="auto",
)

# The processor handles both image preprocessing and tokenization
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf", revision="a272c74")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "Dipto084/RepLLaVA4",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
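
Once the adapter is loaded, inference follows the standard LLaVA 1.5 flow. The snippet below is a minimal sketch: the image URL and prompt are placeholders, and it assumes the `USER: <image>\n... ASSISTANT:` prompt format used by `llava-hf/llava-1.5-7b-hf`.

```python
import requests
from PIL import Image

# Placeholder image and prompt for illustration
image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"

# Preprocess, generate, and decode
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```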