amitha/mllava-llama2-en-zh
Task: Visual Question Answering
Libraries: Transformers, Safetensors
Dataset: LinkSoul/Chinese-LLaVA-Vision-Instructions
Languages: English, Chinese
Tags: llava_llama, llava, vlm, custom_code
Paper: arXiv:2406.11665
License: apache-2.0
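Because the repo is tagged custom_code and ships its own modeling files (llava_llama.py, llava_arch.py, clip_encoder.py), loading it through Transformers presumably requires trust_remote_code=True. The following is a minimal loading sketch, not the author's documented usage; the dtype and device placement are assumptions, and the actual inference interface (prompt format, image preprocessing) should be taken from the repo's own code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amitha/mllava-llama2-en-zh"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~14.7 GB of weights; fp16 halves memory vs. fp32
    device_map="auto",          # spread layers across available devices
    trust_remote_code=True,     # needed: the LlavaLlamaForCausalLM code lives in this repo
)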
Branch: main · 1 contributor · History: 8 commits
Latest commit: Update README.md (d9d0008, verified) by amitha, 5 months ago
File                               Size      LFS   Last commit                    Committed
.gitattributes                     1.52 kB         initial commit                 5 months ago
README.md                          426 B           Update README.md               5 months ago
clip_encoder.py                    3.67 kB         Upload LlavaLlamaForCausalLM   5 months ago
config.json                        1.36 kB         Upload LlavaLlamaForCausalLM   5 months ago
constants.py                       941 B           Upload LlavaLlamaForCausalLM   5 months ago
generation_config.json             183 B           Upload LlavaLlamaForCausalLM   5 months ago
llava_arch.py                      18.1 kB         Upload LlavaLlamaForCausalLM   5 months ago
llava_llama.py                     5.48 kB         Upload LlavaLlamaForCausalLM   5 months ago
model-00001-of-00003.safetensors   4.94 GB   LFS   Upload LlavaLlamaForCausalLM   5 months ago
model-00002-of-00003.safetensors   4.95 GB   LFS   Upload LlavaLlamaForCausalLM   5 months ago
model-00003-of-00003.safetensors   4.85 GB   LFS   Upload LlavaLlamaForCausalLM   5 months ago
model.safetensors.index.json       73.2 kB         Upload LlavaLlamaForCausalLM   5 months ago
multimodal_encoder.py              1.16 kB         Upload LlavaLlamaForCausalLM   5 months ago
multimodal_projector.py            2.03 kB         Upload LlavaLlamaForCausalLM   5 months ago
utils.py                           8.28 kB         Upload LlavaLlamaForCausalLM   5 months ago
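The weights are split into three safetensors shards tied together by model.safetensors.index.json, which in the standard Transformers sharded-checkpoint layout maps each tensor name to the shard file that stores it. A small sketch of inspecting that index locally (assuming the file has been downloaded to the working directory):

import json
from collections import Counter

# The index file carries a metadata block (total byte size of all weights)
# and a weight_map of {tensor_name: shard_filename}.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

print("total size (bytes):", index["metadata"]["total_size"])

# Count how many tensors live in each of the three shards.
shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")

Transformers reads this same index automatically during from_pretrained, so manual inspection is only useful for debugging or partial loading.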