AlekseyElygin committed on
Commit f1e9c04 · verified · 1 parent: f001959

End of training

Files changed (2):
  1. README.md +10 -7
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [unsloth/llama-3.2-1b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-instruct-bnb-4bit) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.9095
+- Loss: 1.8254
 
 ## Model description
 
@@ -39,14 +39,14 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 8
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 3407
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 50
+- lr_scheduler_warmup_steps: 100
 - num_epochs: 1
 - mixed_precision_training: Native AMP
 
@@ -54,9 +54,12 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 2.0803        | 0.3185 | 50   | 2.0749          |
-| 1.945         | 0.6369 | 100  | 1.9421          |
-| 1.9269        | 0.9554 | 150  | 1.9095          |
+| 1.8725        | 0.1594 | 50   | 1.9015          |
+| 1.8203        | 0.3187 | 100  | 1.8936          |
+| 1.766         | 0.4781 | 150  | 1.8729          |
+| 1.8071        | 0.6375 | 200  | 1.8494          |
+| 1.7771        | 0.7968 | 250  | 1.8358          |
+| 1.8222        | 0.9562 | 300  | 1.8254          |
 
 
 ### Framework versions
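The batch-size changes above are consistent with each other: the reported total_train_batch_size is the per-device batch size multiplied by gradient_accumulation_steps (and by the device count, assumed to be 1 here since the README does not mention multi-GPU training). A minimal sketch of that relationship:

```python
def total_train_batch_size(train_batch_size: int,
                           gradient_accumulation_steps: int,
                           num_devices: int = 1) -> int:
    """Effective batch size per optimizer step, as reported in the README.

    num_devices=1 is an assumption; the commit does not state a device count.
    """
    return train_batch_size * gradient_accumulation_steps * num_devices

# Values from the two README revisions:
print(total_train_batch_size(8, 4))  # old revision -> 32
print(total_train_batch_size(4, 4))  # new revision -> 16
```

So the new revision halves both the per-device and the effective batch size while keeping gradient_accumulation_steps at 4.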
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1c1ab30a34e03a626b146eb47fa07c2530fc22c29f852827809ad2fb9a328747
+oid sha256:6ac7cfb78eb08b39cbe63df18394693b950a8c621294f2d4093c357e68b35939
 size 45118424
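The adapter weights are stored via Git LFS, so only the pointer changes in this commit: the oid line is the SHA-256 digest of the actual file contents. A small sketch, using only the standard library, of how a downloaded `adapter_model.safetensors` could be checked against the pointer's oid:

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, i.e. the value that a
    Git LFS pointer records on its 'oid sha256:<hex>' line."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# After downloading the file from this revision, the digest should match
# the new pointer in this commit:
# lfs_oid("adapter_model.safetensors") == (
#     "6ac7cfb78eb08b39cbe63df18394693b950a8c621294f2d4093c357e68b35939")
```

Reading in fixed-size chunks keeps memory flat regardless of file size (here ~45 MB, per the pointer's size field).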