Update config.json
model_type: "llava_llama" to "llava"
```
ValueError: The checkpoint you are trying to load has model type llava_llama but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
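The error comes from Transformers' config resolution: `AutoConfig` looks up `model_type` in its registry of known architectures, and `llava_llama` is not registered, while `llava` is. A minimal sketch to check that the edited config now resolves; the local checkpoint path is a placeholder for wherever this repo is downloaded:

```python
from transformers import AutoConfig

# Placeholder path: point this at the local clone of the checkpoint
# whose config.json was edited in this commit.
config = AutoConfig.from_pretrained("./llava-checkpoint")

# With model_type "llava" the lookup succeeds; with "llava_llama"
# the same call raises the ValueError shown above.
print(config.model_type)  # expected: "llava"
```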
config.json (+1 -1)

```diff
@@ -28,7 +28,7 @@
   "mm_vision_select_feature": "patch",
   "mm_vision_select_layer": -2,
   "mm_vision_tower": "openai/clip-vit-large-patch14-336",
-  "model_type": "llava_llama",
+  "model_type": "llava",
   "num_attention_heads": 32,
   "num_hidden_layers": 32,
   "num_key_value_heads": 8,
```