UsernameJustAnother committed on
Commit c14f700
1 Parent(s): 3b4ceab

Create README.md

Files changed (1): README.md (+83, -0)
---
base_model: unsloth/Mistral-Nemo-Instruct-2407
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- rp
- gguf
- experimental
- long-context
---

# Uploaded model

- **Developed by:** UsernameJustAnother
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407

This is a Q8_0 GGUF of Marlin v6. The notes for Marlin are below.

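If you want to try the quant locally, here is a minimal sketch using the `llama-cpp-python` bindings. The file name, context size, and prompts are placeholders, not anything documented by this repo:

```python
# Sketch: loading a Q8_0 GGUF with llama-cpp-python. The path and n_ctx are
# placeholders; point model_path at wherever you downloaded the .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path = "Marlin-v6-Q8_0.gguf",  # hypothetical filename
    n_ctx = 16384,                       # Nemo supports long context; size to taste
)

# The model was trained in ChatML, so prompt it with chat-style turns.
response = llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a creative writing partner."},
        {"role": "user",   "content": "Write the opening line of a mystery."},
    ],
    max_tokens = 256,
)
print(response["choices"][0]["message"]["content"])
```
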
Standard disclaimer: This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9

New for v6:
- Slightly different source mix: down to 8,000 records of mostly-human convos and stories, curated by me, trained in ChatML (see the formatting sketch after this list).
- The stories have been edited to remove author's notes, and the RP chats tweaked to remove many ministrations.
- Different learning rate, and back to Celeste's scaling factor setup (but Celeste trained on -base, this is -instruct).
- Now with added eval! I worked out how to get eval stats (and wandb) set up, so now I can see my failures in graphical form.
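
For reference, "trained in ChatML" means each record gets rendered into `<|im_start|>` / `<|im_end|>` turns rather than Mistral's own instruct template. A minimal sketch of that formatting follows; the record schema and field names are hypothetical, since the actual dataset layout isn't documented here:

```python
# Minimal ChatML formatting sketch. The `record` structure is hypothetical;
# the curated dataset's real schema isn't documented in these notes.
def to_chatml(record):
    """Render a list of {role, content} turns as one ChatML training string."""
    out = []
    for turn in record["conversations"]:  # hypothetical field name
        out.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>")
    return "\n".join(out)

example = {
    "conversations": [
        {"role": "system",    "content": "You are a creative writing partner."},
        {"role": "user",      "content": "Continue the story."},
        {"role": "assistant", "content": "The harbour lights guttered out, one by one."},
    ]
}
print(to_chatml(example))
```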

And of course yay Unsloth for letting this all train on a single A100 with variable (wildly variable) context length.

It was trained with the following settings. The notes listed these kwargs loose, so the imports and the `TrainingArguments` wrapper below are assumed, added here just to make the snippet self-contained:

```python
from unsloth import FastLanguageModel, is_bfloat16_supported
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 128,    # 128 / sqrt(256) gives a scaling factor of 8
    lora_dropout = 0.1,  # Supports any, but = 0 is optimized
    bias = "none",       # Supports any, but = "none" is optimized
    # "unsloth" uses 30% less VRAM and fits 2x larger batch sizes
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,   # sets the adapter scaling factor to lora_alpha/sqrt(r) instead of lora_alpha/r
    loftq_config = None, # no LoftQ
)

lr_scheduler_kwargs = {
    'min_lr': 0.0000024  # floor for cosine_with_min_lr; adjust as needed
}

args = TrainingArguments(
    output_dir = "outputs",           # assumed; required by TrainingArguments
    per_device_train_batch_size = 2,
    per_device_eval_batch_size = 2,   # defaults to 8!
    gradient_accumulation_steps = 4,
    eval_accumulation_steps = 4,
    prediction_loss_only = True,      # during eval, only return the loss, not predictions
    warmup_steps = 50,
    num_train_epochs = 2,             # for longer training runs; ~12 hrs/epoch?
    learning_rate = 1e-5,             # Celeste used 8e-5, the paper 1e-4; tried 5e-5, now 1e-5
    fp16 = not is_bfloat16_supported(),
    bf16 = is_bfloat16_supported(),
    fp16_full_eval = True,            # stops eval from trying to use fp32
    eval_strategy = "steps",          # 'no', 'steps', or 'epoch'; needs an eval dataset
    eval_steps = 100,                 # if eval_strategy is 'steps', evaluate every N steps
    logging_steps = 5,                # so eval and logging happen on the same schedule
    optim = "adamw_8bit",
    weight_decay = 0,
    lr_scheduler_type = "cosine_with_min_lr",  # linear, cosine, cosine_with_min_lr; default is linear
    lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
    seed = 3407,
)
```
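
For context, here is roughly how those pieces plug into TRL's `SFTTrainer` under Unsloth. This is a sketch, not the actual training script: `tokenizer`, the dataset variables, and `max_seq_length` are assumed names, not values from these notes.

```python
# Sketch only: wiring the model and args above into TRL's SFTTrainer.
# `tokenizer` would come from FastLanguageModel.from_pretrained(); the dataset
# variables and max_seq_length are hypothetical stand-ins.
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_dataset,    # e.g. the 8,000 curated records, pre-rendered to ChatML
    eval_dataset = eval_dataset,      # held-out split feeding the eval stats / wandb graphs
    dataset_text_field = "text",      # column holding the formatted training string
    max_seq_length = max_seq_length,  # variable (wildly variable) context length
    args = args,                      # the TrainingArguments built above
)

trainer.train()
```

With `use_rslora = True`, the effective adapter scale works out to lora_alpha/sqrt(r) = 128/sqrt(256) = 8, matching Celeste's scaling factor setup, rather than the plain LoRA scale of lora_alpha/r = 0.5.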

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)