
InternViT-6B-448px-V1-2

[πŸ“‚ GitHub] [πŸ†• Blog] [πŸ“œ InternVL 1.0 Paper] [πŸ“œ InternVL 1.5 Report]

[πŸ—¨οΈ Chat Demo] [πŸ€— HF Demo] [πŸš€ Quick Start] [πŸ“– 中文解读] [πŸ“– Documents]

We release our new InternViT weights as InternViT-6B-448px-V1-2. This release is part of the InternVL 1.2 update, in which the InternViT-6B model was further pre-trained. Specifically, we increased its input resolution from 224 to 448 and integrated it with Nous-Hermes-2-Yi-34B. To equip the model with high-resolution processing and OCR capabilities, both the vision encoder and the MLP were trained, using a mix of image-captioning and OCR-specific datasets.
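As a rough illustration of this training setup (a minimal sketch only; the module names 'vision_model' and 'mlp1' are assumptions and not taken from the released training code), unfreezing just the vision encoder and the MLP projector while keeping the language model frozen might look like this:

import torch.nn as nn

def freeze_all_but_vit_and_mlp(mllm: nn.Module) -> None:
    # Train only the vision encoder and the MLP projector; keep the LLM frozen.
    # The prefixes below are hypothetical and depend on the actual model class.
    trainable_prefixes = ('vision_model', 'mlp1')
    for name, param in mllm.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)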

Model Details

  • Model Type: vision foundation model, feature backbone
  • Model Stats:
    • Params (M): 5540 (the last 3 blocks are discarded)
    • Image size: 448 x 448
  • Pretrain Dataset: LAION-en, LAION-zh, COYO, GRIT, COCO, TextCaps, Objects365, OpenImages, All-Seeing, Wukong-OCR, LaionCOCO-OCR, and other OCR-related datasets. To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
  • Note: InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a MLLM based on this model, please make use of the features from the last layer.

Model Usage (Image Embeddings)

import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision encoder in bfloat16 on the GPU (custom code requires trust_remote_code).
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-2',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# Load an example image and preprocess it into 448 x 448 pixel values.
image = Image.open('./examples/image1.jpg').convert('RGB')

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-2')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Forward pass to obtain the image embeddings.
outputs = model(pixel_values)
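
Building on the snippet above, here is a minimal, hypothetical sketch of how the last-layer features could be fed to an MLLM. It assumes the standard transformers output convention (outputs.last_hidden_state), that the first token is the class token, and it uses a single nn.Linear standing in for the actual MLP projector; the LLM embedding dimension (4096) is an illustrative assumption, not taken from this repository.

import torch.nn as nn

# Take the last-layer features and drop the first (class) token to keep only patch tokens.
patch_tokens = outputs.last_hidden_state[:, 1:, :]   # [batch, num_patches, vit_dim]

# Hypothetical projector mapping vision features into the LLM embedding space.
vit_dim, llm_dim = patch_tokens.shape[-1], 4096      # llm_dim is illustrative
projector = nn.Linear(vit_dim, llm_dim).to(torch.bfloat16).cuda()

with torch.no_grad():
    visual_embeds = projector(patch_tokens)          # ready to concatenate with text embeddings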

Citation

If you find this project useful in your research, please consider citing:

@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}