---
license: llama3
base_model: meta-llama/Meta-Llama-3-70B
tags:
- generated_from_trainer
model-index:
- name: 70BDOL
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
See axolotl config

axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-70B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: datasets/dolphin201-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/not_samantha_norefusals.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/Orca-Math-resort-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: datasets/openhermes200k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: 70BDOL
val_set_size: 0.0002
output_dir: ./70BDOL

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

wandb_project: 70BDOL
wandb_watch:
wandb_run_id:
wandb_log_model:

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint: 70BDOL/checkpoint-2149
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

saves_per_epoch: 5
save_total_limit: 2
save_steps:
evals_per_epoch: 5
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```
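
Every dataset above is loaded with axolotl's `sharegpt` type and rendered with the ChatML conversation template, which is also why `<|im_start|>` and `<|im_end|>` are registered as tokens. A minimal sketch of how one such record turns into ChatML text; the record contents below are made up, only the field names and the template follow the config:

```python
# Hypothetical ShareGPT-style record matching the field layout the
# `sharegpt` dataset type expects; the real dataset contents differ.
record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "Reverse the string 'axolotl' in Python."},
        {"from": "gpt", "value": "'axolotl'[::-1] gives 'ltoloxa'."},
    ]
}

def to_chatml(conversations):
    # Render each turn as ChatML: <|im_start|>{role}\n{content}<|im_end|>\n
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    return "".join(
        f"<|im_start|>{role_map[turn['from']]}\n{turn['value']}<|im_end|>\n"
        for turn in conversations
    )

print(to_chatml(record["conversations"]))
```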

# 70BDOL

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.5272

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 96 (micro-batch 3 × 4 accumulation steps × 8 GPUs)
- total_eval_batch_size: 24
- optimizer: 8-bit AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8626        | 0.0   | 1    | 0.8021          |
| 0.5395        | 0.2   | 307  | 0.5590          |
| 0.5062        | 0.4   | 614  | 0.5462          |
| 0.4612        | 0.6   | 921  | 0.5373          |
| 0.4884        | 0.8   | 1228 | 0.5302          |
| 0.48          | 1.0   | 1535 | 0.5176          |
| 0.3536        | 1.19  | 1842 | 0.5342          |
| 0.3205        | 1.39  | 2149 | 0.5311          |
| 0.2462        | 1.6   | 2456 | 0.5373          |
| 0.2384        | 1.8   | 2763 | 0.5275          |
| 0.2594        | 2.0   | 3070 | 0.5196          |
| 0.1562        | 2.19  | 3377 | 0.5347          |
| 0.1412        | 2.39  | 3684 | 0.5334          |
| 0.1468        | 2.59  | 3991 | 0.5276          |
| 0.1458        | 2.79  | 4298 | 0.5279          |
| 0.1368        | 2.99  | 4605 | 0.5272          |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.4.0.dev20240412+rocm6.0
- Datasets 2.15.0
- Tokenizers 0.15.0
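
### Inference sketch

Not part of the auto-generated card, but since training used the ChatML template and special tokens above, a minimal inference sketch might look like the following. It assumes the saved tokenizer carries the ChatML chat template set in the config, and uses the config's `output_dir` (`./70BDOL`) as a stand-in for wherever the final weights live:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: weights and tokenizer live in the axolotl output_dir;
# replace with the published repo id if loading from the Hub.
model_path = "./70BDOL"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain gradient checkpointing in one sentence."},
]

# chat_template: chatml was set during training, so this should produce a
# <|im_start|>/<|im_end|> formatted prompt ending with the assistant header.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```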