---
tags:
- generated_from_trainer
model-index:
- name: SmolLM-Ora
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: /media/renfroe/llms/SmolLM-360M/
model_type: LlamaForCausalLM
tokenizer_type: GPT2Tokenizer

seed: 122887
load_in_8bit: false
load_in_4bit: false
strict: false

max_steps: 0
resume_from_checkpoint:

datasets:
  - path: /home/renfroe/Desktop/sqa_tiny-llama_dataset/Dynamic_Optimization_Methods_with_Applications_sqa_answers_only.json
    type:
      field_instruction: question
      field_output: answer
      format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
      no_input_format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
  - path: /home/renfroe/Dev/tinyllama-models/dataset/open_hermes_top_tech.json
    type: sharegpt
  - path: /home/renfroe/Desktop/sqa_tiny-llama_dataset/hermes_prior_knowledge_question_expansion_with_answers.json
    type:
      field_instruction: question
      field_output: answer
      format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
      no_input_format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
  - path: /home/renfroe/Desktop/sqa_tiny-llama_dataset/hermes_prior_knowledge_question_expansion_with_answers.json
    type:
      field_instruction: question
      field_output: answer
      format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
      no_input_format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"
  - path: /home/renfroe/Desktop/sqa_tiny-llama_dataset/or-farm_sharegpt.json
    type: sharegpt

dataset_prepared_path:
val_set_size: 0.2
output_dir: ./SmolLM-Ora
auto_resume_from_checkpoints: false

sequence_len: 2048
sample_packing: true
chat_template: chatml

wandb_project: SmolLM-Ora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 10
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: linear
weight_decay: 0.0000001
learning_rate: 0.0001
lr_scheduler_kwargs:
# num_cycles: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
eval_sample_packing: False

warmup_steps: 50
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 4
debug:
deepspeed:
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<|endoftext|>"
  eos_token: "<|endoftext|>"
  pad_token: "<|endoftext|>"
```

</details><br>
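For context on the `format`/`no_input_format` fields above: they are templates that wrap each `question` (`field_instruction`) in ChatML user/assistant turns, with the `answer` (`field_output`) appended as the completion. Below is a rough sketch of what one rendered training example looks like; the QA pair is invented for illustration, and exact end-of-turn token handling is left to Axolotl:

```python
# Template copied from the config above; {instruction} is the `question` field.
PROMPT_FORMAT = "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

def render_example(question: str, answer: str) -> str:
    """Approximate the text Axolotl trains on for one QA pair."""
    # The answer (`field_output`) becomes the assistant completion; the
    # closing <|im_end|> mirrors the ChatML turn structure.
    return PROMPT_FORMAT.format(instruction=question) + answer + "<|im_end|>\n"

# Invented example pair, not taken from the training data:
print(render_example(
    "What is Bellman's principle of optimality?",
    "An optimal policy has the property that the remaining decisions form "
    "an optimal policy for the state resulting from the first decision.",
))
```

Because the config sets `train_on_inputs: false`, the loss is masked over the user turn and computed only on the assistant completion.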

# SmolLM-Ora

This model is a fine-tuned version of SmolLM-360M (loaded from the local checkpoint named in the config above) on the datasets listed in that config. It achieves the following results on the evaluation set:
- Loss: 0.8298

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The training mix combines ChatML-formatted question-answer datasets with ShareGPT-style conversation datasets (see the Axolotl config above); 20% of the data was held out for evaluation (`val_set_size: 0.2`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 10
- seed: 122887
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0131        | 0.01  | 1    | 1.0419          |
| 0.9727        | 0.25  | 27   | 0.9962          |
| 0.953         | 0.5   | 54   | 0.9076          |
| 0.8494        | 0.75  | 81   | 0.8792          |
| 0.9297        | 1.0   | 108  | 0.8632          |
| 0.8801        | 1.22  | 135  | 0.8527          |
| 0.8133        | 1.47  | 162  | 0.8459          |
| 0.8342        | 1.72  | 189  | 0.8410          |
| 0.8973        | 1.97  | 216  | 0.8376          |
| 0.7731        | 2.19  | 243  | 0.8350          |
| 0.8207        | 2.44  | 270  | 0.8332          |
| 0.7963        | 2.69  | 297  | 0.8318          |
| 0.81          | 2.94  | 324  | 0.8309          |
| 0.8351        | 3.18  | 351  | 0.8302          |
| 0.8104        | 3.43  | 378  | 0.8299          |
| 0.9019        | 3.68  | 405  | 0.8298          |
| 0.7828        | 3.93  | 432  | 0.8298          |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.0
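## Inference example

The card lacks a usage snippet, so here is a minimal inference sketch with `transformers`. It assumes the trained weights sit in the config's `output_dir` (`./SmolLM-Ora`); substitute your own path or Hub repo id. The prompt uses the same ChatML template as training, and the example question is made up:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# `./SmolLM-Ora` is the output_dir from the Axolotl config; replace with a
# Hub repo id if the weights were uploaded.
model_path = "./SmolLM-Ora"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Same ChatML prompt format the model saw during training.
prompt = "<|im_start|>user\nWhat is dynamic programming?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Note that the config maps `bos_token`, `eos_token`, and `pad_token` all to `<|endoftext|>`, so `generate` stops on `<|endoftext|>`; you may also want to truncate the decoded text at the first `<|im_end|>`.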