quip-4k-llama / running_log.txt
[INFO|configuration_utils.py:672] 2024-10-16 13:31:32,921 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 13:31:32,923 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
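
The dump above is the stock Hermes-3-Llama-3.1-8B config (Llama 3.1 architecture, 131,072-token context via llama3 RoPE scaling). A minimal sketch of reproducing the same load outside the trainer, assuming only the transformers library and hub access:

    from transformers import AutoConfig

    # Resolves the same cached config.json the trainer logs above.
    config = AutoConfig.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
    print(config.rope_scaling)  # {'factor': 8.0, ..., 'rope_type': 'llama3'}
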
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:33,179 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/tokenizer.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:33,180 >> loading file tokenizer.model from cache at None
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:33,180 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:33,180 >> loading file special_tokens_map.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/special_tokens_map.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:33,180 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/tokenizer_config.json
[INFO|tokenization_utils_base.py:2478] 2024-10-16 13:31:33,694 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
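
That warning fires whenever the tokenizer registers tokens beyond the base vocabulary, whose embedding rows start untrained. A sketch of the tokenizer side; the commented add_special_tokens call and its token are hypothetical illustrations, not something this run did:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
    # Hypothetical: adding a new special token would create an untrained
    # embedding row, which is exactly what the warning is about.
    # tokenizer.add_special_tokens({"additional_special_tokens": ["<my_tag>"]})
    # model.resize_token_embeddings(len(tokenizer))  # then fine-tune
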
[INFO|configuration_utils.py:672] 2024-10-16 13:31:35,153 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 13:31:35,154 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:35,830 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/tokenizer.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:35,831 >> loading file tokenizer.model from cache at None
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:35,831 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:35,831 >> loading file special_tokens_map.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/special_tokens_map.json
[INFO|tokenization_utils_base.py:2214] 2024-10-16 13:31:35,831 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/tokenizer_config.json
[INFO|tokenization_utils_base.py:2478] 2024-10-16 13:31:36,165 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|configuration_utils.py:672] 2024-10-16 13:31:40,950 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 13:31:40,952 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|modeling_utils.py:3726] 2024-10-16 13:31:41,010 >> loading weights file model.safetensors from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/model.safetensors.index.json
[INFO|modeling_utils.py:1622] 2024-10-16 13:31:41,012 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1099] 2024-10-16 13:31:41,013 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"eos_token_id": 128040
}
[INFO|modeling_utils.py:4568] 2024-10-16 13:39:26,337 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4576] 2024-10-16 13:39:26,337 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at NousResearch/Hermes-3-Llama-3.1-8B.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
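
A sketch of the corresponding model load; model ID and dtype are as logged, while device_map is an assumption for convenience (the log does not show device placement):

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "NousResearch/Hermes-3-Llama-3.1-8B",
        torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" above
        device_map="auto",           # assumption, not from the log
    )
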
[INFO|configuration_utils.py:1054] 2024-10-16 13:39:26,714 >> loading configuration file generation_config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/generation_config.json
[INFO|configuration_utils.py:1099] 2024-10-16 13:39:26,714 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": 128040,
"temperature": 0.6,
"top_p": 0.9
}
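
These sampling defaults (temperature 0.6, top-p 0.9) come from the checkpoint's generation_config.json. A sketch of generating with exactly these settings, reusing the model and tokenizer from the sketches above; the prompt and max_new_tokens are placeholders:

    inputs = tokenizer("Write a haiku about autumn.", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, do_sample=True, temperature=0.6,
                         top_p=0.9, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
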
[INFO|trainer.py:667] 2024-10-16 13:39:27,203 >> Using auto half precision backend
[INFO|trainer.py:2243] 2024-10-16 13:39:28,204 >> ***** Running training *****
[INFO|trainer.py:2244] 2024-10-16 13:39:28,204 >> Num examples = 4,244
[INFO|trainer.py:2245] 2024-10-16 13:39:28,204 >> Num Epochs = 6
[INFO|trainer.py:2246] 2024-10-16 13:39:28,204 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2249] 2024-10-16 13:39:28,204 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:2250] 2024-10-16 13:39:28,204 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2251] 2024-10-16 13:39:28,204 >> Total optimization steps = 792
[INFO|trainer.py:2252] 2024-10-16 13:39:28,211 >> Number of trainable parameters = 20,971,520
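
The numbers are mutually consistent: 32 = 2 per device × 8 accumulation steps × 2 devices (implied), and 792 total steps = 6 epochs × 132 steps, where 132 ≈ 4,244 examples / 32. The 20,971,520 trainable parameters match rank-8 LoRA over all seven linear projections of this architecture: per layer r × (2×8192 + 2×5120 + 3×18432) = 81,920 × r, times 32 layers gives 2,621,440 × r, and r = 8 yields exactly 20,971,520. A sketch of an adapter config reproducing that count; rank, alpha, and target list are inferred from the parameter count, not read from the log:

    from peft import LoraConfig, get_peft_model

    lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.0, task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # trainable params: 20,971,520 || ...
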
[INFO|trainer.py:3705] 2024-10-16 13:50:33,869 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-100
[INFO|configuration_utils.py:672] 2024-10-16 13:50:34,457 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 13:50:34,458 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 13:50:34,614 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 13:50:34,615 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-100/special_tokens_map.json
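
A checkpoint lands every 100 optimizer steps. Had the run been interrupted, the standard Trainer resume path would pick one up; a sketch, where trainer stands for the Trainer instance driving this run (not shown in the log):

    trainer.train(
        resume_from_checkpoint="saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-100"
    )
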
[INFO|trainer.py:3705] 2024-10-16 14:01:29,859 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-200
[INFO|configuration_utils.py:672] 2024-10-16 14:01:30,499 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:01:30,500 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:01:30,637 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-200/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:01:30,637 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-200/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 14:12:39,845 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-300
[INFO|configuration_utils.py:672] 2024-10-16 14:12:41,404 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:12:41,405 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:12:41,563 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-300/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:12:41,563 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-300/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 14:23:58,146 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-400
[INFO|configuration_utils.py:672] 2024-10-16 14:23:58,714 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:23:58,715 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:23:58,874 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-400/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:23:58,875 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-400/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 14:35:15,746 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-500
[INFO|configuration_utils.py:672] 2024-10-16 14:35:16,768 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:35:16,769 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:35:16,929 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:35:16,929 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-500/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 14:46:13,837 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-600
[INFO|configuration_utils.py:672] 2024-10-16 14:46:14,835 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:46:14,836 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:46:14,994 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-600/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:46:14,995 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-600/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 14:57:08,664 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-700
[INFO|configuration_utils.py:672] 2024-10-16 14:57:09,697 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 14:57:09,698 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 14:57:09,860 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-700/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 14:57:09,860 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-700/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-16 15:07:30,273 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-792
[INFO|configuration_utils.py:672] 2024-10-16 15:07:30,871 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 15:07:30,873 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 15:07:31,029 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-792/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 15:07:31,029 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/checkpoint-792/special_tokens_map.json
[INFO|trainer.py:2505] 2024-10-16 15:07:31,384 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3705] 2024-10-16 15:07:31,386 >> Saving model checkpoint to saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59
[INFO|configuration_utils.py:672] 2024-10-16 15:07:33,366 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--NousResearch--Hermes-3-Llama-3.1-8B/snapshots/896ea440e5a9e6070e3d8a2774daf2b481ab425b/config.json
[INFO|configuration_utils.py:739] 2024-10-16 15:07:33,367 >> Model config LlamaConfig {
"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128040,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.0",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2649] 2024-10-16 15:07:33,476 >> tokenizer config file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/tokenizer_config.json
[INFO|tokenization_utils_base.py:2658] 2024-10-16 15:07:33,477 >> Special tokens file saved in saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59/special_tokens_map.json
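
The final save directory holds a LoRA adapter plus tokenizer files, not full model weights. A sketch of loading it for inference, with the model ID and adapter path as logged; merge_and_unload is an optional extra step:

    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(
        "NousResearch/Hermes-3-Llama-3.1-8B", torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(
        base, "saves/LLaMA3.1-8B/lora/4k_train_2024-10-16-13-29-59")
    model = model.merge_and_unload()  # optional: fold the adapter into the base weights
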
[INFO|modelcard.py:449] 2024-10-16 15:07:33,708 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
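
The dropped result carries only a 'task' entry; transformers keeps a model-index result only if it also names a dataset (and normally metrics). A sketch of a shape that would survive, with hypothetical dataset and metric values:

    result = {
        "task": {"name": "Causal Language Modeling", "type": "text-generation"},
        "dataset": {"name": "4k training set", "type": "custom"},      # hypothetical
        "metrics": [{"name": "Loss", "type": "loss", "value": 1.23}],  # hypothetical
    }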