End of training
- README.md +110 -0
- adapter_model.safetensors +1 -1
README.md
ADDED
@@ -0,0 +1,110 @@
---
base_model: unsloth/mistral-7b-v0.3
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_pct_ortho_r16
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-v0.3_pct_ortho_r16

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0091
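
The card does not yet include a usage example; the following is a minimal, hedged sketch of loading this adapter on top of the stated base model with PEFT. The adapter id `Mistral-7B-v0.3_pct_ortho_r16` in the call is an assumption (substitute the actual Hub repo id or a local path), and the prompt is purely illustrative.

```python
# Hedged sketch: load the base model, then attach this PEFT adapter.
# Assumes `transformers`, `peft`, and `accelerate` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-v0.3", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3")

# "Mistral-7B-v0.3_pct_ortho_r16" is a placeholder for the adapter's repo id or local path.
model = PeftModel.from_pretrained(base, "Mistral-7B-v0.3_pct_ortho_r16")

inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```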

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
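
As a hedged reconstruction of the list above, an equivalent `transformers.TrainingArguments` configuration might look like the sketch below; the output directory is an assumption, and the dataset, LoRA, and Unsloth settings are not recorded in this card.

```python
# Hedged sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_pct_ortho_r16",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=32,  # 2 per device x 32 steps = 64 effective batch size
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```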

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9566        | 0.0206 | 8    | 2.0118          |
| 2.0191        | 0.0413 | 16   | 1.9983          |
| 2.0779        | 0.0619 | 24   | 2.0212          |
| 2.0339        | 0.0825 | 32   | 2.0205          |
| 2.0429        | 0.1032 | 40   | 2.0132          |
| 2.0601        | 0.1238 | 48   | 2.0219          |
| 2.041         | 0.1445 | 56   | 2.0171          |
| 2.0602        | 0.1651 | 64   | 2.0230          |
| 2.0341        | 0.1857 | 72   | 2.0311          |
| 2.0378        | 0.2064 | 80   | 2.0319          |
| 2.0961        | 0.2270 | 88   | 2.0402          |
| 2.106         | 0.2476 | 96   | 2.0208          |
| 2.1219        | 0.2683 | 104  | 2.0328          |
| 2.0569        | 0.2889 | 112  | 2.0528          |
| 2.1062        | 0.3096 | 120  | 2.0355          |
| 2.0522        | 0.3302 | 128  | 2.0365          |
| 2.0631        | 0.3508 | 136  | 2.0300          |
| 2.1052        | 0.3715 | 144  | 2.0409          |
| 2.0875        | 0.3921 | 152  | 2.0454          |
| 2.0854        | 0.4127 | 160  | 2.0273          |
| 2.0533        | 0.4334 | 168  | 2.0529          |
| 2.1096        | 0.4540 | 176  | 2.0373          |
| 2.0288        | 0.4746 | 184  | 2.0289          |
| 2.1344        | 0.4953 | 192  | 2.0375          |
| 2.0952        | 0.5159 | 200  | 2.0445          |
| 2.0613        | 0.5366 | 208  | 2.0374          |
| 2.0441        | 0.5572 | 216  | 2.0225          |
| 2.0493        | 0.5778 | 224  | 2.0380          |
| 2.0568        | 0.5985 | 232  | 2.0219          |
| 2.0477        | 0.6191 | 240  | 2.0261          |
| 2.1065        | 0.6397 | 248  | 2.0310          |
| 2.0245        | 0.6604 | 256  | 2.0208          |
| 2.1013        | 0.6810 | 264  | 2.0270          |
| 2.0356        | 0.7017 | 272  | 2.0205          |
| 2.0815        | 0.7223 | 280  | 2.0117          |
| 2.0898        | 0.7429 | 288  | 2.0175          |
| 2.0529        | 0.7636 | 296  | 2.0171          |
| 2.0281        | 0.7842 | 304  | 2.0134          |
| 2.0473        | 0.8048 | 312  | 2.0150          |
| 2.0315        | 0.8255 | 320  | 2.0088          |
| 2.0215        | 0.8461 | 328  | 2.0071          |
| 2.0003        | 0.8667 | 336  | 2.0093          |
| 2.0561        | 0.8874 | 344  | 2.0136          |
| 2.0407        | 0.9080 | 352  | 2.0132          |
| 2.0257        | 0.9287 | 360  | 2.0105          |
| 2.0294        | 0.9493 | 368  | 2.0090          |
| 2.0321        | 0.9699 | 376  | 2.0089          |
| 2.0516        | 0.9906 | 384  | 2.0091          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:50afd88762339fa8315ef9a5d97fdcb0c5f9ef9d2a0c553209e177b0c9802d7d
 size 167832240
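
The Git LFS pointer above records only the new adapter file's SHA-256 digest and byte size; below is a minimal sketch (assuming a locally downloaded `adapter_model.safetensors`) of checking a copy against that pointer.

```python
# Hedged sketch: verify a local adapter_model.safetensors against the LFS pointer above.
import hashlib
import os

EXPECTED_SHA256 = "50afd88762339fa8315ef9a5d97fdcb0c5f9ef9d2a0c553209e177b0c9802d7d"
EXPECTED_SIZE = 167832240  # bytes

def verify(path: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256 and os.path.getsize(path) == EXPECTED_SIZE

print(verify("adapter_model.safetensors"))
```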