aarticerebras committed 724d984 (parent: a4065fd): Create README.md

README.md ADDED

---
{}
---

# Model Card for Cerebras-ViT-L-336-patch14-llava13b-ShareGPT4V

The checkpoints here are for the vision encoder part of **cerebras/Cerebras-LLaVA-13B**.

**Note**: _ShareGPT4V_ is added to the model name so that the checkpoints load correctly with the [LLaVA source repo](https://github.com/haotian-liu/LLaVA/blob/main/llava/model/multimodal_encoder/builder.py#L8).
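
As a rough illustration of why the name matters, the linked builder chooses a vision-tower implementation based on the tower's name. The sketch below is a simplified paraphrase of that dispatch, not the exact source in builder.py:

```python
# Simplified paraphrase of LLaVA's vision-tower dispatch (not the exact source).
# Names containing "openai", "laion", or "ShareGPT4V" are treated as CLIP-style
# encoders, which is why "ShareGPT4V" appears in this repository's name.
def build_vision_tower(vision_tower_name: str) -> str:
    if any(tag in vision_tower_name for tag in ("openai", "laion", "ShareGPT4V")):
        return f"load CLIPVisionTower from {vision_tower_name}"
    raise ValueError(f"Unknown vision tower: {vision_tower_name}")

print(build_vision_tower("cerebras/Cerebras-ViT-L-336-patch14-llava13b-ShareGPT4V"))
```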

For full details of this model and its training, please read our upcoming blog post.

## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Use of this model should also abide by the OpenAI terms of use: https://openai.com/policies/terms-of-use

## Model Architecture
Cerebras-ViT-L-336-patch14-llava13b-ShareGPT4V is a transformer model based on CLIP-VisionModel-Large (openai/clip-vit-large-patch14-336). It handles images of size 336 x 336 with a patch size of 14, i.e. (336 / 14)^2 = 576 patches per image.
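
A minimal loading sketch using the `transformers` CLIP classes is shown below. It assumes this checkpoint keeps the standard Hugging Face CLIP vision-encoder layout of openai/clip-vit-large-patch14-336; the COCO image URL is only an example input:

```python
# Minimal sketch, assuming the checkpoint follows the standard Hugging Face
# CLIP vision-encoder layout (same as openai/clip-vit-large-patch14-336).
from PIL import Image
import requests
from transformers import CLIPVisionModel, CLIPImageProcessor

model_id = "cerebras/Cerebras-ViT-L-336-patch14-llava13b-ShareGPT4V"
model = CLIPVisionModel.from_pretrained(model_id)
processor = CLIPImageProcessor.from_pretrained(model_id)

# 336 x 336 input with 14 x 14 patches -> (336 / 14)^2 = 576 patch tokens,
# plus one [CLS] token, i.e. 577 tokens per image with hidden size 1024.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # expected: [1, 577, 1024]
```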

## Intended Use
_Primary intended uses_: The primary use of LLaVA is research on large multimodal models and chatbots.

_Primary intended users_: The primary intended users of the model are researchers (both academic and industry) in computer vision, natural language processing, machine learning, and artificial intelligence.

## Limitations and Bias
The pre-training dataset may have contained offensive or inappropriate content, even after data-cleansing filters were applied, and this can be reflected in model-generated text. We recommend that users exercise caution when using these models in their applications or in any use case that may cause deliberate or unintentional harm to others. This model is for demonstration purposes only.