---
base_model: unsloth/Meta-Llama-3.1-8B
library_name: peft
license: llama3.1
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: adventure-nemo-ws
    results: []
---

Built with Axolotl

# Meta-Llama-3.1-8B-Adventure-QLoRA

This LoRA was trained on the Llama 3.1 8B base model using completion (raw text) format.

The datasets used were:

- Spring Dragon
- Skein

This is not an instruct model and no instruct format was used.

The intended use is text completion, with user input prefixed by `>`, as in `> User Input`. This is the default behavior of Kobold Lite's Adventure mode.

If merged into an instruct model, it should impart the flavor of the text-adventure data; in that case, use the instruct model's own prompt format.

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
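Since this adapter was trained with axolotl, the list above roughly corresponds to an axolotl config like the following. This is a reconstruction using axolotl's key names, not the author's actual config file; note the effective batch size is 8 × 2 = 16.

```yaml
# Hypothetical axolotl config fragment mirroring the hyperparameters above --
# a sketch for orientation, not the config actually used for this run.
base_model: unsloth/Meta-Llama-3.1-8B
adapter: qlora                   # QLoRA per the model name; an assumption
micro_batch_size: 8              # per-device train batch size
gradient_accumulation_steps: 2   # effective train batch size: 8 * 2 = 16
eval_batch_size: 1
seed: 42
learning_rate: 5e-5
optimizer: adamw_torch           # betas=(0.9, 0.999), eps=1e-8 are the defaults
lr_scheduler: cosine_with_min_lr
warmup_steps: 20
num_epochs: 1
```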

## Framework versions

- PEFT 0.12.0
- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1