---
library_name: transformers
tags:
- robotics
- vlm
- image-text-to-text
- multimodal
- pretraining
license: mit
language:
- en
pipeline_tag: image-text-to-text
---

# Prism with Qwen 2.5 0.5B backbone (Prismatic-Compatible Version)

This checkpoint is a Prism VLM with a Qwen 2.5 0.5B language-model backbone, trained on the Llava-1.5-Instruct dataset and packaged in the Prismatic-compatible format for use with the MiniVLA (openvla-mini) codebase.

## Usage Instructions

See the [MiniVLA GitHub README](https://github.com/Stanford-ILIAD/openvla-mini/blob/main/README.md) for full instructions on using this checkpoint for downstream training and fine-tuning.
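
For quick reference, below is a minimal inference sketch. It assumes the openvla-mini codebase exposes the same `prismatic.load` / prompt-builder API as the upstream Prismatic VLMs repository, and the checkpoint path is a placeholder; defer to the README linked above for the authoritative usage instructions.

```python
# Minimal sketch, assuming openvla-mini mirrors the Prismatic VLMs inference API
# (`prismatic.load`, `get_prompt_builder`, `generate`). The checkpoint ID below is
# a placeholder for this model's Hub repo or a local path, not a verified ID.
import torch
from PIL import Image
from prismatic import load

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the Prismatic-compatible checkpoint (local path or Hub ID).
vlm = load("<path-or-hub-id-of-this-checkpoint>")
vlm.to(device, dtype=torch.bfloat16)

# Build a single-turn prompt and run generation on one image.
image = Image.open("example.jpg").convert("RGB")
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message="What is in this image?")

generated_text = vlm.generate(
    image,
    prompt_builder.get_prompt(),
    do_sample=False,
    max_new_tokens=128,
)
print(generated_text)
```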

## Citation

**BibTeX:**

```bibtex
@article{belkhale24minivla,
    title={MiniVLA: A Better VLA with a Smaller Footprint},
    author={Suneel Belkhale and Dorsa Sadigh},
    url={https://github.com/Stanford-ILIAD/openvla-mini},
    year={2024}
}
```