---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
---

# Model card for InternViT-6B-224px

## Model Details
- **Model Type:** feature backbone
- **Model Stats:**
  - Params (M): 5903
  - Image size: 224 x 224
- **Papers:**
  - InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks
- **GitHub:**
  - https://github.com/OpenGVLab/InternVL
- **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi

## Model Usage

### Image Embeddings

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the 6B-parameter ViT backbone in bfloat16; trust_remote_code is
# required because the architecture is defined in the model repository.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

# The model uses CLIP-style preprocessing (resize to 224x224, normalize).
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-224px')

# Match the model's dtype and device before the forward pass.
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)
```
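
As a minimal sketch of reading out the embeddings, this assumes the remote code returns a standard `transformers`-style output object with a `last_hidden_state` field (shapes and field names may differ across model revisions):

```python
# Assumption: outputs follows the usual transformers convention, where
# last_hidden_state holds per-token features of shape
# [batch, num_tokens, hidden_dim].
features = outputs.last_hidden_state

# One simple way to get a single vector per image is to mean-pool
# over the token dimension; other pooling schemes are possible.
image_embedding = features.mean(dim=1)
print(image_embedding.shape)  # e.g. torch.Size([1, hidden_dim])
```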