AlekseyElygin committed
Commit ebb0bda · verified · 1 Parent(s): 0f96934

End of training

Files changed (2):
  1. README.md +9 -10
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [unsloth/llama-3.2-1b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-instruct-bnb-4bit) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.8254
+ - Loss: 1.7877
 
  ## Model description
 
@@ -39,14 +39,14 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 4
+ - train_batch_size: 5
  - eval_batch_size: 8
  - seed: 3407
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 16
+ - total_train_batch_size: 20
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 100
+ - lr_scheduler_warmup_steps: 50
  - num_epochs: 1
  - mixed_precision_training: Native AMP
 
@@ -54,12 +54,11 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch  | Step | Validation Loss |
  |:-------------:|:------:|:----:|:---------------:|
- | 1.8725        | 0.1594 | 50   | 1.9015          |
- | 1.8203        | 0.3187 | 100  | 1.8936          |
- | 1.766         | 0.4781 | 150  | 1.8729          |
- | 1.8071        | 0.6375 | 200  | 1.8494          |
- | 1.7771        | 0.7968 | 250  | 1.8358          |
- | 1.8222        | 0.9562 | 300  | 1.8254          |
+ | 1.7636        | 0.1992 | 50   | 1.8301          |
+ | 1.6351        | 0.3984 | 100  | 1.8318          |
+ | 1.6559        | 0.5976 | 150  | 1.8157          |
+ | 1.6928        | 0.7968 | 200  | 1.7976          |
+ | 1.7535        | 0.9960 | 250  | 1.7877          |
 
 
  ### Framework versions
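
For reference, the updated hyperparameters in the README diff above map onto a standard `transformers` `TrainingArguments` configuration. This is a minimal sketch, not the author's actual training script: the `output_dir` value and the `optim` choice are assumptions; only the numeric values come from the diff. Note the effective batch size: train_batch_size × gradient_accumulation_steps = 5 × 4 = 20, matching `total_train_batch_size`.

```python
# Minimal sketch of the updated training configuration. Values are taken from
# the README diff above; output_dir and the optim string are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",             # assumption: not stated in the card
    learning_rate=2e-4,
    per_device_train_batch_size=5,    # raised from 4 in this commit
    per_device_eval_batch_size=8,
    seed=3407,
    gradient_accumulation_steps=4,    # effective batch size: 5 * 4 = 20 (was 16)
    optim="adamw_torch",              # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    warmup_steps=50,                  # lowered from 100 in this commit
    num_train_epochs=1,
    fp16=True,                        # "Native AMP" mixed precision
)
```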
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8ab402520269402da616dd8cd8f8218c1b61796471dcfa5b5a3db7186d05760a
+ oid sha256:586ca7a974861ee5ca8a07f33cb869b3c1803f9f908eb046d714ee7ee9248956
  size 45118424
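
The file touched by this commit, `adapter_model.safetensors` (~45 MB), is a PEFT adapter rather than full model weights, so it is loaded on top of the 4-bit base model named in the card. A minimal sketch, assuming a standard PEFT setup; the adapter repo id is a placeholder, not a confirmed location:

```python
# Minimal sketch: attach the adapter updated in this commit to its base model.
# "AlekseyElygin/<adapter-repo>" is a placeholder, not a confirmed repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3.2-1b-instruct-bnb-4bit"  # base model named in the card
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load adapter_model.safetensors from the repo or directory containing it.
model = PeftModel.from_pretrained(base, "AlekseyElygin/<adapter-repo>")
```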