# LoRA Adapter Model
This is a LoRA adapter fine-tuned on top of llava-hf/llava-1.5-7b-hf.
## Model Details
- Base Model: llava-hf/llava-1.5-7b-hf
- Training Parameters (a configuration sketch follows this list):
  - Learning Rate: 1e-4
  - Batch Size: 16
  - Training Steps: 58
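
For reference, a hypothetical training setup matching these hyperparameters might look like the sketch below. Only the learning rate, batch size, and step count come from this card; the LoRA rank, alpha, and target modules are assumptions, as the card does not document them.

```python
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration, TrainingArguments

# Base model (the exact training recipe is not documented in this card)
base_model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Rank, alpha, and target modules are assumptions, not stated in the card
lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Hyperparameters taken from the Model Details section above
training_args = TrainingArguments(
    output_dir="lora-out",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    max_steps=58,
)
```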
## Usage
```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import PeftModel
import torch
# Load base model
base_model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    revision="a272c74",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the matching processor (handles both image and text inputs)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf", revision="a272c74")

# Load LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "Dipto084/RepLLaVA4",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
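
Continuing the snippet above, a minimal inference sketch might look like the following. The prompt uses the standard LLaVA-1.5 `USER: <image>\n... ASSISTANT:` format; the image URL is a placeholder assumption.

```python
import requests
from PIL import Image

# Placeholder image; substitute your own input
url = "https://example.com/image.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 chat-style prompt with an image placeholder token
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"

# Prepare inputs and match the model's device and dtype
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```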