---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
dataset: /data/llava-finetune-full
tags:
- finetuned
- multimodal
inference: false
---
These are weights for a version of `mistralai/Mistral-7B-Instruct-v0.1` fine-tuned for multimodal applications with the [multi_token](https://github.com/sshh12/multi_token) library (LoRA adapters plus a CLIP-to-LLM projector; see the Model section below).
### Modalities
* `CLIPVisionModality`: use `<image>` in the prompt text and pass the corresponding `images`; each image is encoded as 576 tokens (see the back-of-the-envelope check below)
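The 576-token figure is consistent with a CLIP ViT-L/14 vision tower at 336x336 resolution, the encoder typically used by LLaVA-style models (an assumption; the exact CLIP variant is not stated here). A quick back-of-the-envelope check:

```python
# Sketch only: assumes a CLIP ViT-L/14 vision tower at 336x336 input,
# the usual choice for LLaVA-style finetunes. A different CLIP variant
# would change these numbers.
image_size = 336   # assumed input resolution
patch_size = 14    # assumed ViT patch size

patches_per_side = image_size // patch_size   # 24
tokens_per_image = patches_per_side ** 2      # 576, matching the count above
print(tokens_per_image)
```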
### Dataset
`/data/llava-finetune-full` (544,610 examples)
```python
{'id': '000000033471',
 'images': ['/data/llava_finetune_data/images/coco/train2017/train2017/000000033471.jpg'],
 'messages': [
   {'content': '<image>\nWhat are the colors of the bus in the image?', 'role': 'user'},
   {'content': 'The bus in the image is white and red.', 'role': 'assistant'},
   {'content': 'What feature can be seen on the back of the bus?', 'role': 'user'},
   {'content': 'The back of the bus features an advertisement.', 'role': 'assistant'},
   {'content': 'Is the bus driving down the street or pulled off to the side?', 'role': 'user'},
   {'content': 'The bus is driving down the street, which is crowded with people and other vehicles.', 'role': 'assistant'}]}
```
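Assuming the dataset was written with `datasets.Dataset.save_to_disk` (the card only gives a local path, so this is a guess about the on-disk format), records like the one above can be inspected as follows:

```python
# Sketch only: assumes the finetuning data was saved with
# datasets.Dataset.save_to_disk at this (non-public) local path.
from datasets import load_from_disk

ds = load_from_disk("/data/llava-finetune-full")
print(len(ds))                      # 544610 examples
example = ds[0]
print(example["id"], example["images"])
for message in example["messages"]:
    print(f"{message['role']}: {message['content']}")
```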
### Training Device(s)
```
name, pci.bus_id, vbios_version
NVIDIA GeForce RTX 3090 Ti, 00000000:02:00.0, 94.02.a0.00.41
```
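The listing above matches the header produced by `nvidia-smi`'s CSV query interface; something like the following reproduces it (the exact command used for the card is not stated):

```python
# Queries nvidia-smi for the same fields shown in the listing above
# (name, pci.bus_id, vbios_version) in CSV format.
import subprocess

print(subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=name,pci.bus_id,vbios_version",
     "--format=csv"],
    text=True,
))
```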
### Usage
GitHub: https://github.com/sshh12/multi_token
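Full multimodal inference (image encoding, `<image>` token expansion, and the projector) is handled by the multi_token code linked above; follow that repository's instructions for end-to-end usage. As a minimal sketch of how the LoRA weights in this repo relate to the base checkpoint, they can be attached with `peft` and `transformers` (the repo id below is a placeholder, and this snippet alone does not process images):

```python
# Minimal sketch: attach this repo's LoRA adapter to the base Mistral
# checkpoint with peft/transformers. Image handling and the vision
# projector require the multi_token codebase linked above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.1"
LORA_REPO = "<this model repository>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, LORA_REPO)
```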
### Model
```
MistralLMMForCausalLM.model =
PeftModelForCausalLM(
(base_model): LoraModel(
(model): MistralLMMForCausalLM(
(model): MistralLMMModel(
(embed_tokens): Embedding(32000, 4096)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(k_proj): Linear(
in_features=4096, out_features=1024, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=1024, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(v_proj): Linear(
in_features=4096, out_features=1024, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=1024, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(o_proj): Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(
in_features=4096, out_features=14336, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=14336, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(up_proj): Linear(
in_features=4096, out_features=14336, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=14336, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(down_proj): Linear(
in_features=14336, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.05, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=14336, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=4096, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(act_fn): SiLUActivation()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
(vision_clip_lmm_projector): Sequential(
(0): Linear(in_features=1024, out_features=4096, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=4096, out_features=4096, bias=True)
)
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
)
)
```
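Read off the dump above: every attention and MLP projection carries a rank-64 LoRA adapter with dropout 0.05, and `vision_clip_lmm_projector` is a two-layer MLP that maps 1024-dim CLIP features into Mistral's 4096-dim embedding space. A rough reconstruction of that shape with `peft` and `torch` (a sketch; `lora_alpha` and other training hyperparameters are not visible in the dump and are assumptions):

```python
# Sketch reconstructing the adapter shape visible in the module dump above.
# r=64 and dropout=0.05 are read off the printout; lora_alpha (and any
# other training hyperparameters) are NOT shown there and are assumptions.
import torch.nn as nn
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                       # lora_A: in_features -> 64, lora_B: 64 -> out_features
    lora_alpha=16,              # assumption: not recoverable from the dump
    lora_dropout=0.05,          # Dropout(p=0.05) in every adapted Linear
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
)

# The CLIP-to-LLM projector printed as `vision_clip_lmm_projector`:
vision_projector = nn.Sequential(
    nn.Linear(1024, 4096),      # CLIP hidden size -> Mistral hidden size
    nn.GELU(),
    nn.Linear(4096, 4096),
)
```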