---
base_model: unsloth/Meta-Llama-3.1-8B
library_name: peft
license: llama3.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: adventure-nemo-ws
results: []
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# Meta-Llama-3.1-8B-Adventure-QLoRA
This QLoRA adapter was trained on Llama 3.1 8B **base** using completion format.
The datasets used were:
- Spring Dragon
- Skein
This is not an instruct model and **no instruct format was used.**
The intended use is plain text completion, with user input given as `> User Input`; this is the default for Kobold Lite's Adventure mode.
If merged into an instruct model, it should impart the flavor of the text-adventure data; in that case, use the instruct model's own prompt format.
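A minimal inference sketch with `transformers` and `peft` is below. The adapter path is a placeholder for this repository's id, and the prompt text is illustrative; only the `> ` action-line convention comes from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "path/to/this-adapter")  # replace with this repo's id

# Completion-format prompt: narration plus a "> " action line, as in
# Kobold Lite Adventure mode. No instruct template is applied.
prompt = (
    "You are standing in a dimly lit cavern. Water drips somewhere in the dark.\n"
    "> look around\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```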
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
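For readers reproducing a similar run outside Axolotl, here is a rough sketch of equivalent `transformers` `TrainingArguments`. The `output_dir` and `min_lr` values are assumptions, since the card does not state them.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="adventure-nemo-ws",     # assumed from the model-index name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,      # 8 * 2 = total train batch size of 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr": 5e-6},  # minimum LR not stated; placeholder
    warmup_steps=20,
    num_train_epochs=1,
)
```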
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1