ToastyPigeon committed on
Commit
39494e2
1 Parent(s): 7fb4344

Update README.md

Files changed (1)
  1. README.md +8 -144
README.md CHANGED
@@ -14,146 +14,20 @@ model-index:
  should probably proofread and complete it, then remove this comment. -->

  [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
- <details><summary>See axolotl config</summary>
-
- axolotl version: `0.4.1`
- ```yaml
- # python -m axolotl.cli.preprocess adventure-l31.yml
- # accelerate launch -m axolotl.cli.train adventure-l31.yml
- # python -m axolotl.cli.merge_lora adventure-l31.yml
-
- base_model: unsloth/Meta-Llama-3.1-8B
- model_type: AutoModelForCausalLM
- tokenizer_type: AutoTokenizer
-
- load_in_8bit: false
- load_in_4bit: true
- strict: false
- sequence_len: 8192 # 99% vram
- bf16: auto
- fp16:
- tf32: false
- flash_attention: true
- special_tokens:
-
- # Data
- dataset_prepared_path: last_run_prepared
- datasets:
-   - path: ColumbidAI/adventure-8k
-     type: completion
- warmup_steps: 20
- shuffle_merged_datasets: true
-
- save_safetensors: true
- saves_per_epoch: 4
- save_total_limit: 2
-
- # WandB
- wandb_project: L31-A
- wandb_entity:
-
- # Iterations
- num_epochs: 1
-
- # Output
- output_dir: ./adventure-command-r-workspace
- hub_model_id: ToastyPigeon/adventure-nemo-ws
- hub_strategy: "all_checkpoints"
-
- # Sampling
- sample_packing: true
- pad_to_sequence_len: true
-
- # Batching
- gradient_accumulation_steps: 2
- micro_batch_size: 8
- gradient_checkpointing: 'unsloth'
- gradient_checkpointing_kwargs:
-   use_reentrant: true
-
- #unsloth_cross_entropy_loss: true
- #unsloth_lora_mlp: true
- #unsloth_lora_qkv: true
- #unsloth_lora_o: true
-
- # Evaluation
- val_set_size: 0.01
- evals_per_epoch: 5
- eval_table_size:
- eval_max_new_tokens: 256
- eval_sample_packing: false
- eval_batch_size: 1
-
- # LoRA
- adapter: qlora
- lora_model_dir:
- lora_r: 64
- lora_alpha: 32
- lora_dropout: 0.125
- lora_target_linear:
- lora_fan_in_fan_out:
- lora_target_modules:
-   - gate_proj
-   - down_proj
-   - up_proj
-   - q_proj
-   - v_proj
-   - k_proj
-   - o_proj
- lora_modules_to_save:
-
- # Optimizer
- optimizer: paged_adamw_8bit # adamw_8bit
- #lr_scheduler: cosine # duplicate key; cosine_with_min_lr below takes effect
- learning_rate: 0.00005
- lr_scheduler: cosine_with_min_lr
- lr_scheduler_kwargs:
-   min_lr: 0.000005
- weight_decay: 0.01
- max_grad_norm: 20.0
-
- # Misc
- train_on_inputs: false
- group_by_length: false
- early_stopping_patience:
- local_rank:
- logging_steps: 1
- xformers_attention:
- debug:
- #deepspeed: /workspace/axolotl/deepspeed_configs/zero3.json # previously blank
- fsdp:
- fsdp_config:
-
- plugins:
-   - axolotl.integrations.liger.LigerPlugin
- liger_rope: true
- liger_rms_norm: true
- liger_swiglu: true
- liger_fused_linear_cross_entropy: true
- ```
-
- </details><br>
-
- # adventure-nemo-ws
-
- This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the ColumbidAI/adventure-8k dataset.
- It achieves the following results on the evaluation set:
- - Loss: 2.3893
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
+ # Meta-Llama-3.1-8B-Adventure-QLoRA
+
+ This LoRA is trained on Llama 3.1 8B **base** using completion format.
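For illustration, a minimal loading sketch with `transformers` and `peft`, using the repo ids from the config above (`unsloth/Meta-Llama-3.1-8B` as the base, `ToastyPigeon/adventure-nemo-ws` as the adapter); the 4-bit settings and sampling parameters here are assumptions, not part of the model card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Meta-Llama-3.1-8B"          # base_model in the axolotl config
adapter_id = "ToastyPigeon/adventure-nemo-ws"  # hub_model_id in the axolotl config

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # mirrors load_in_4bit: true
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

# Completion-style use: no chat template, just continue the text.
prompt = "You stand at the mouth of a dark cave.\n\n> Enter the cave\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```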
+
+ The datasets used were:
+ - Spring Dragon
+ - Skein
+
+ This is not an instruct model and **no instruct format was used.**
+
+ The intended use is text completion, with user input given as `> User Input`. This is the default format in Kobold Lite's Adventure mode.
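A hypothetical transcript in that shape (illustrative only, not a verbatim sample from Spring Dragon or Skein):

```
You are standing in a dusty antechamber. A torch gutters in its sconce,
and a low stone door leads north.

> Take the torch

You lift the torch from the sconce. Shadows swing across the walls.

> Go north
```

The model is meant to continue the story after the final `> ` action line.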
+
+ If merged into an instruct model, it should impart the flavor of the text-adventure data; in that case, use the instruct model's own prompt format.
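A minimal merge sketch with `peft`; the instruct target below is a hypothetical example, and any instruct tune sharing the Llama 3.1 8B architecture should merge the same way:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Hypothetical target: swap in whichever Llama 3.1 8B instruct tune you use.
instruct_id = "meta-llama/Llama-3.1-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(instruct_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, "ToastyPigeon/adventure-nemo-ws")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained("llama-3.1-8b-instruct-adventure")
```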

  ### Training hyperparameters

@@ -169,16 +43,6 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_steps: 20
  - num_epochs: 1

- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 2.2246 | 0.0045 | 1 | 2.4988 |
- | 2.1034 | 0.2013 | 45 | 2.4257 |
- | 2.2138 | 0.4027 | 90 | 2.4077 |
- | 2.1541 | 0.6040 | 135 | 2.3941 |
- | 2.0555 | 0.8054 | 180 | 2.3893 |
-

  ### Framework versions