gordonhu committed on
Commit 41dbb98
1 Parent(s): 4f41567

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -9,29 +9,29 @@ library_name: transformers
 <br>
 <br>
 
-# LoViM Model Card
+# BLIVA Model Card
 
 ## Model details
 
 **Model type:**
-LoViM is an open-source Vision-Language model trained by initializing from InstructBLIP and aligning it with Vicuna on multimodal instruction-finetuning data.
+BLIVA is an open-source Vision-Language model trained by initializing from InstructBLIP and aligning it with Vicuna on multimodal instruction-finetuning data.
 It is composed of an EVA-CLIP vision encoder, a Q-Former, a projection layer, and an auto-regressive language model based on the decoder-only transformer architecture.
 
 **Model date:**
-LoViM_FlanT5 was trained in July 2023.
+BLIVA_FlanT5 was trained in July 2023.
 
 **Paper or resources for more information:**
-https://gordonhu608.github.io/lovim/
+https://gordonhu608.github.io/BLIVA/
 
 **License:**
 BSD 3-Clause License
 
 **Where to send questions or comments about the model:**
-https://github.com/mlpc-ucsd/LoViM
+https://github.com/mlpc-ucsd/BLIVA
 
 ## Intended use
 **Primary intended uses:**
-The primary use of LoViM FlanT5 is commercial use of large multimodal models.
+The primary use of BLIVA FlanT5 is commercial use of large multimodal models.
 
 **Primary intended users:**
 The primary intended users of this model are commercial companies working in computer vision, natural language processing, machine learning, and artificial intelligence.
@@ -46,4 +46,4 @@ For zero-shot evaluation on general image task, we selected Nocaps, Flickr30K, V
 
 For zero-shot evaluation on text-rich image OCR tasks, we selected ST-VQA, OCR-VQA, Text-VQA, and Doc-VQA.
 
-More details are in our GitHub repository: https://github.com/mlpc-ucsd/LoViM
+More details are in our GitHub repository: https://github.com/mlpc-ucsd/BLIVA
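The updated card describes BLIVA as an EVA-CLIP vision encoder, a Q-Former, a projection layer, and a decoder-only language model. Below is a minimal, illustrative PyTorch sketch of that data flow only; it is not code from the BLIVA repository, and every module, dimension, and name is a placeholder assumption.

```python
# Illustrative sketch (not from the BLIVA repo): the high-level flow named in the
# model card -- EVA-CLIP vision encoder -> Q-Former -> projection -> decoder-only LM.
# All modules below are stand-ins; real components are large pretrained networks.
import torch
import torch.nn as nn

class BlivaLikeSketch(nn.Module):
    def __init__(self, vis_dim=1024, qformer_dim=768, lm_dim=2048, num_query_tokens=32):
        super().__init__()
        self.vision_encoder = nn.Linear(vis_dim, vis_dim)      # stands in for EVA-CLIP
        self.query_tokens = nn.Parameter(torch.zeros(1, num_query_tokens, qformer_dim))
        self.qformer = nn.MultiheadAttention(                   # stands in for the Q-Former
            qformer_dim, 8, kdim=vis_dim, vdim=vis_dim, batch_first=True
        )
        self.projection = nn.Linear(qformer_dim, lm_dim)        # aligns visual tokens with the LM
        self.language_model = nn.Linear(lm_dim, lm_dim)         # stands in for the decoder-only LM

    def forward(self, image_features):
        patches = self.vision_encoder(image_features)                # (B, num_patches, vis_dim)
        queries = self.query_tokens.expand(patches.size(0), -1, -1)  # learned query tokens
        visual_tokens, _ = self.qformer(queries, patches, patches)   # cross-attend to patches
        lm_inputs = self.projection(visual_tokens)                   # map into LM embedding space
        # In the real model these projected tokens are fed to the LM with the text prompt.
        return self.language_model(lm_inputs)

out = BlivaLikeSketch()(torch.randn(2, 257, 1024))
print(out.shape)  # torch.Size([2, 32, 2048])
```

The sketch only mirrors the composition stated in the card: the projection layer is what maps Q-Former outputs into the language model's embedding space so visual tokens can accompany the text prompt.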