[INFO|configuration_utils.py:672] 2024-10-16 09:49:02,201 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 09:49:02,204 >> Model config MistralConfig { "_name_or_path": "alexsherstinsky/Mistral-7B-v0.1-sharded", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:02,490 >> loading file tokenizer.model from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer.model
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:02,490 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:02,490 >> loading file added_tokens.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/added_tokens.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:02,490 >> loading file special_tokens_map.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/special_tokens_map.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:02,490 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer_config.json
[INFO|configuration_utils.py:672] 2024-10-16 09:49:04,482 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 09:49:04,483 >> Model config MistralConfig { "_name_or_path": "alexsherstinsky/Mistral-7B-v0.1-sharded", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:04,718 >> loading file tokenizer.model from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer.model
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:04,718 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:04,718 >> loading file added_tokens.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/added_tokens.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:04,718 >> loading file special_tokens_map.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/special_tokens_map.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 09:49:04,718 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/tokenizer_config.json
[INFO|configuration_utils.py:672] 2024-10-16 09:49:09,937 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 09:49:09,939 >> Model config MistralConfig { "_name_or_path": "alexsherstinsky/Mistral-7B-v0.1-sharded", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|modeling_utils.py:3726] 2024-10-16 09:49:09,998 >> loading weights file model.safetensors from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/model.safetensors.index.json
[INFO|modeling_utils.py:1622] 2024-10-16 09:49:10,000 >> Instantiating MistralForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1099] 2024-10-16 09:49:10,000 >> Generate config GenerationConfig { "bos_token_id": 1, "eos_token_id": 2 }
[INFO|modeling_utils.py:4568] 2024-10-16 09:55:28,695 >> All model checkpoint weights were used when initializing MistralForCausalLM.
[INFO|modeling_utils.py:4576] 2024-10-16 09:55:28,695 >> All the weights of MistralForCausalLM were initialized from the model checkpoint at alexsherstinsky/Mistral-7B-v0.1-sharded. If your task is similar to the task the model of the checkpoint was trained on, you can already use MistralForCausalLM for predictions without further training.
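For reference, a minimal sketch of the load step the log records above, using the standard transformers API. The model ID and the bfloat16 dtype come directly from the log; device_map="auto" is an assumption, since the log does not show device placement:

```python
# Minimal sketch of the load recorded above (not the exact training-framework call).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexsherstinsky/Mistral-7B-v0.1-sharded"  # from the log

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Instantiating ... under default dtype torch.bfloat16"
    device_map="auto",           # assumption: device placement is not shown in the log
)
```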
[INFO|configuration_utils.py:1054] 2024-10-16 09:55:29,047 >> loading configuration file generation_config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/generation_config.json
[INFO|configuration_utils.py:1099] 2024-10-16 09:55:29,048 >> Generate config GenerationConfig { "bos_token_id": 1, "eos_token_id": 2 }
[INFO|trainer.py:667] 2024-10-16 09:55:29,534 >> Using auto half precision backend
[INFO|trainer.py:2243] 2024-10-16 09:55:30,419 >> ***** Running training *****
[INFO|trainer.py:2244] 2024-10-16 09:55:30,419 >> Num examples = 4,244
[INFO|trainer.py:2245] 2024-10-16 09:55:30,419 >> Num Epochs = 6
[INFO|trainer.py:2246] 2024-10-16 09:55:30,419 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2249] 2024-10-16 09:55:30,419 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:2250] 2024-10-16 09:55:30,419 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2251] 2024-10-16 09:55:30,419 >> Total optimization steps = 792
[INFO|trainer.py:2252] 2024-10-16 09:55:30,422 >> Number of trainable parameters = 20,971,520
[INFO|trainer.py:3705] 2024-10-16 10:07:43,119 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-100
[INFO|configuration_utils.py:672] 2024-10-16 10:07:44,603 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 10:07:44,605 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 10:07:44,761 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 10:07:44,761 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-100/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 10:19:46,355 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-200
[INFO|configuration_utils.py:672] 2024-10-16 10:19:46,941 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 10:19:46,942 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 10:19:47,094 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-200/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 10:19:47,094 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-200/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 10:32:01,716 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-300
[INFO|configuration_utils.py:672] 2024-10-16 10:32:02,636 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 10:32:02,637 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 10:32:02,789 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-300/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 10:32:02,790 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-300/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 10:44:27,461 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-400
[INFO|configuration_utils.py:672] 2024-10-16 10:44:28,915 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 10:44:28,917 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 10:44:29,069 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-400/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 10:44:29,070 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-400/special_tokens_map.json
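The trainer banner figures above are internally consistent. A quick sanity check (all numbers come from the log; the device count and the LoRA layout are inferences, since neither is printed):

```python
# Sanity check of the trainer banner above.
num_examples, num_epochs = 4244, 6
per_device_batch, grad_accum, total_batch = 2, 8, 32

num_devices = total_batch // (per_device_batch * grad_accum)  # 32 // 16 = 2 GPUs (inferred)
steps_per_epoch = num_examples // total_batch                 # 4244 // 32 = 132
assert steps_per_epoch * num_epochs == 792                    # matches "Total optimization steps"

# 20,971,520 trainable parameters is consistent with rank-8 LoRA adapters on all
# seven Mistral projections (q/k/v/o/gate/up/down) across 32 layers; this is an
# inference, since the adapter config itself is not printed in this log.
r, layers = 8, 32
per_layer = r * ((4096 + 4096) + 2 * (4096 + 1024) + (4096 + 4096)
                 + 2 * (4096 + 14336) + (14336 + 4096))
assert per_layer * layers == 20_971_520
```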
[INFO|trainer.py:3705] 2024-10-16 10:56:51,831 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-500
[INFO|configuration_utils.py:672] 2024-10-16 10:56:52,413 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 10:56:52,414 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 10:56:52,570 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 10:56:52,570 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-500/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 11:08:56,973 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-600
[INFO|configuration_utils.py:672] 2024-10-16 11:08:57,551 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 11:08:57,552 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 11:08:57,705 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-600/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 11:08:57,706 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-600/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 11:20:56,378 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-700
[INFO|configuration_utils.py:672] 2024-10-16 11:20:57,345 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 11:20:57,346 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 11:20:57,500 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-700/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 11:20:57,501 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-700/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 11:32:19,002 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-792
[INFO|configuration_utils.py:672] 2024-10-16 11:32:19,595 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 11:32:19,596 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 11:32:19,750 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-792/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 11:32:19,750 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/checkpoint-792/special_tokens_map.json
[INFO|trainer.py:2505] 2024-10-16 11:32:19,978 >> Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3705] 2024-10-16 11:32:19,980 >> Saving model checkpoint to saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21
[INFO|configuration_utils.py:672] 2024-10-16 11:32:20,485 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--alexsherstinsky--Mistral-7B-v0.1-sharded/snapshots/c7551c3b2d6a1d1b54b8ab5440c5d5c28ede15b9/config.json
[INFO|configuration_utils.py:739] 2024-10-16 11:32:20,485 >> Model config MistralConfig { "_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.45.0", "use_cache": true, "vocab_size": 32000 }
[INFO|tokenization_utils_base.py:2649] 2024-10-16 11:32:20,622 >> tokenizer config file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 11:32:20,622 >> Special tokens file saved in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21/special_tokens_map.json
[INFO|modelcard.py:449] 2024-10-16 11:32:20,750 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
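The run ends with the final adapter and tokenizer files in saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21. A minimal sketch of loading that output for inference, assuming the directory is a standard PEFT LoRA checkpoint (the saves/.../lora/ layout suggests a LLaMA-Factory-style run, which writes PEFT-compatible adapters):

```python
# Minimal sketch, assuming the final save above is a standard PEFT LoRA adapter.
# The base model ID and the output directory come from the log.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alexsherstinsky/Mistral-7B-v0.1-sharded"
adapter_dir = "saves/Mistral-7B-v0.1/lora/4k_train_2024-10-16-09-48-21"

tokenizer = AutoTokenizer.from_pretrained(adapter_dir)  # tokenizer files were saved with the adapter
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_dir)
model = model.merge_and_unload()  # optional: fold the LoRA deltas into the base weights
```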