---
license: apache-2.0
datasets:
- openbmb/RLAIF-V-Dataset
language:
- en
---

# Model Card for RLAIF-V

[GitHub](https://github.com/RLHF-V/RLAIF-V)

**RLAIF-V-12B** is a multimodal large language model (MLLM) that exhibits **super GPT-4V trustworthiness**. The model is built upon OmniLMM from the [MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V) series.

We utilize a novel framework, [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), which **aligns MLLMs in a fully open-source paradigm**. This framework maximally exploits the [open-source feedback](https://huggingface.co/datasets/HaoyeZhang/RLAIF-V-Dataset) from two key perspectives: **high-quality feedback data** and an **online feedback learning algorithm**.
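As a rough illustration of the learning signal (not the exact RLAIF-V training recipe), the open-source feedback can be organized into preference pairs of a chosen and a rejected response, which are then used with a direct-preference-style objective. The sketch below is purely illustrative: the function name, tensor shapes, and the `beta` hyperparameter are assumptions, not the released training code.

```python
# Illustrative sketch of a direct-preference-style loss on feedback pairs.
# This is NOT the official RLAIF-V training code; it only shows how preference
# data (chosen vs. rejected responses) can drive alignment.
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logps, policy_rejected_logps,
                    ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Push the policy to prefer the chosen response over the rejected one,
    measured relative to a frozen reference model."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy example with made-up per-response log-probabilities.
loss = preference_loss(
    torch.tensor([-12.3]), torch.tensor([-15.1]),
    torch.tensor([-13.0]), torch.tensor([-13.2]),
)
```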


## Model Details

### Key Features

* 🏅 **Super GPT-4V Trustworthiness**: By learning from open-source AI feedback, RLAIF-V-12B achieves super GPT-4V trustworthiness in both generative and discriminative tasks.
* 💪 **Maintaining Strong Performance on General Abilities**: On benchmarks testing general abilities (e.g., LLaVA Bench, MMStar), RLAIF-V-12B also performs well.

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/ypXZxb4HE-jDPJU9115bi.png" alt="fig1" width="90%"/>
</p>

### Examples
<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/yg-Ksp9qi8AodURSmX769.png" alt="fig2-1" width="81%"/>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/NSEkeBmH99B44rX8GTZig.png" alt="fig2-1" width="80%"/>
</p>

### Model Description
- **Related model:** [OmniLMM-12B](https://huggingface.co/openbmb/OmniLMM-12B)
- **Training data:** [RLAIF-V-Dataset](https://huggingface.co/datasets/HaoyeZhang/RLAIF-V-Dataset)
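
Below is a minimal, hypothetical loading sketch. It assumes the checkpoint can be loaded through `transformers` with `trust_remote_code=True` and exposes an OmniLMM/MiniCPM-V-style `chat` interface; the repository id, the `chat` signature, and the image/message format are assumptions, so please consult the [GitHub repository](https://github.com/RLHF-V/RLAIF-V) for the official inference code.

```python
# Hypothetical usage sketch -- the exact inference entry point is documented
# in the RLAIF-V GitHub repository; this only illustrates the general pattern
# of loading a remote-code multimodal checkpoint with transformers.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/RLAIF-V-12B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,        # custom OmniLMM code ships with the repo
    torch_dtype=torch.bfloat16,
).eval().cuda()

image = Image.open("example.jpg").convert("RGB")
question = "Describe this image in detail."

# `chat` is assumed to follow the OmniLMM/MiniCPM-V style interface;
# check the repository for the exact signature.
answer = model.chat(
    image=image,
    msgs=[{"role": "user", "content": question}],
    tokenizer=tokenizer,
)
print(answer)
```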