Owen Arliawan committed on
Commit ca1fe32
1 Parent(s): 926505b

Update README.md

Files changed (1): README.md +114 -0

README.md CHANGED
---
license: apache-2.0
---
Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement:
https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b/blob/main/LICENSE

We have not benchmarked this model yet, so we don't know how good it is, but we are happy for anyone to try it out and give us feedback.
You can try this model on our API at https://www.awanllm.com/

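
A minimal way to query it is sketched below. This assumes the hosted API exposes an OpenAI-compatible chat completions endpoint; the base URL, model name, and API key here are placeholders, so check the documentation on the site above for the actual values.

```python
# Minimal sketch, assuming an OpenAI-compatible chat completions endpoint.
# The base_url, model id, and API key below are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.awanllm.com/v1",  # assumed endpoint; see the API docs
    api_key="YOUR_AWANLLM_API_KEY",         # placeholder
)

response = client.chat.completions.create(
    model="awanllm-llama-3-8b-dolphin",     # placeholder model id
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what QLoRA fine-tuning is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```
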
Trained with a 4096-token sequence length, while the base model uses an 8192-token context. In our testing it still handles the full 8192-token context just fine.

Trained on Eric Hartford's (Cognitive Computations) https://huggingface.co/datasets/cognitivecomputations/dolphin dataset, as we have seen great results from their Dolphin finetunes of previous Llama models.

Trained for 2 days on 2x RTX 3090s on our own machine, using 4-bit loading and QLoRA (rank 64, alpha 128), which results in roughly 2% trainable weights.

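
For reference, that QLoRA setup corresponds roughly to the following peft/bitsandbytes sketch. The actual training was done with Axolotl using the config further down; the `meta-llama` repo id and the `all-linear` target selection here are our best-guess equivalents (matching `lora_target_linear: true`), not code taken from the run. `print_trainable_parameters()` is a quick way to confirm the ~2% figure.

```python
# Sketch of an equivalent QLoRA setup in transformers/peft (the real run used Axolotl; see config below).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit loading, as described above
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # gated repo; requires access approval
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,                                   # rank 64
    lora_alpha=128,                         # alpha 128
    lora_dropout=0.05,
    target_modules="all-linear",            # requires a recent peft; targets all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # should report roughly 2% trainable parameters
```
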
The goal for this model is to be less censored and strong at general tasks, like the previous Dolphin models by Eric Hartford.
We started training this BEFORE they launched their own full-weight-trained Llama-3-8B Dolphin 2.9 with their own curated datasets and the newer "Dolphin 2.9" dataset.
https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b

The difference is that we train this using Meta's new Llama 3 Instruct format rather than the ChatML format that Dolphin models are usually trained on.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
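
This is the same layout that the Llama 3 Instruct tokenizer produces via its built-in chat template, so if you build prompts in code you do not need to assemble the special tokens by hand. A small sketch using the base model's tokenizer (swap in this model's repo id once you have it):

```python
# Sketch: producing the Llama 3 Instruct format with the tokenizer's chat template.
from transformers import AutoTokenizer

# Gated repo; substitute this model's own repo id if you have the weights locally.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain LoRA in one paragraph."},
]

# add_generation_prompt=True appends the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```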

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Axolotl Config:
```yaml
base_model: Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

train_on_inputs: false
group_by_length: false
load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 2048
bf16: true
fp16: false
tf32: false
flash_attention: true

# Data
datasets:
  - path: flan1m-universal-uncensored-system-2048.jsonl
    type:
      system_prompt: ""
      system_format: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
      field_system: system
      field_instruction: input
      field_output: output
      format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
      no_input_format: "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

warmup_steps: 10
dataset_prepared_path: ./last_run_prepared

# Iterations
num_epochs: 1
saves_per_epoch: 4

# Evaluation
val_set_size: 0.01
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
evals_per_epoch: 4

# LoRA
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
save_safetensors: true

# Sampling
sample_packing: true
pad_to_sequence_len: true

# Batching
gradient_accumulation_steps: 32
micro_batch_size: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true

# Optimizer
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002

# Misc
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.1
special_tokens:
  pad_token: <|end_of_text|>
```
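
The config above saves a QLoRA adapter to `./qlora-out`; to get a standalone model like the one published here, the adapter is typically merged back into the base weights. A hedged sketch of that step with peft (this is the usual post-QLoRA workflow, not something spelled out in the card; paths are the ones from the config):

```python
# Sketch: merging the trained QLoRA adapter into the base model (typical post-training step).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base, "./qlora-out")   # adapter dir from the Axolotl config
merged = model.merge_and_unload()                        # fold the LoRA weights into the base weights

merged.save_pretrained("./merged-model", safe_serialization=True)
AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct").save_pretrained("./merged-model")
```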