TaiGary committed on
Commit 3f57035 · verified · 1 Parent(s): 436beaf

Update README.md

Files changed (1)
  1. README.md +0 -73
README.md CHANGED
@@ -1,73 +0,0 @@
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 2and3_apps_30k_v6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 2and3_apps_30k_v6

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the 2and3_apps_30k_v6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1593

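As a minimal usage sketch (assuming the checkpoint is published as `TaiGary/2and3_apps_30k_v6`; the actual Hub repository id may differ), the model can be loaded with `transformers`:

```python
# Minimal usage sketch. The repository id below is assumed from the
# commit author and model name and may differ from the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TaiGary/2and3_apps_30k_v6"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place weights on available GPUs
)

# Qwen2.5-Instruct checkpoints expect the chat template.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
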
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1

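The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch only: the actual run was driven by LLaMA-Factory's own config, `output_dir` is a placeholder, the 100-step eval cadence is read off the results table below, and `bf16` is an assumption typical for full fine-tunes of Qwen2.5.

```python
# Rough TrainingArguments equivalent of the listed hyperparameters.
# Sketch only: the actual run was launched through LLaMA-Factory.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="2and3_apps_30k_v6",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # 1 per device x 4 GPUs x 2 = total train batch of 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="steps",           # evaluation every 100 steps (see table below)
    eval_steps=100,
    logging_steps=100,
    bf16=True,                       # assumption; not stated in the card
)
```
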
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1754        | 0.1025 | 100  | 0.1826          |
| 0.1914        | 0.2049 | 200  | 0.1759          |
| 0.1891        | 0.3074 | 300  | 0.1709          |
| 0.1999        | 0.4098 | 400  | 0.1681          |
| 0.1822        | 0.5123 | 500  | 0.1657          |
| 0.1815        | 0.6148 | 600  | 0.1631          |
| 0.1823        | 0.7172 | 700  | 0.1616          |
| 0.1693        | 0.8197 | 800  | 0.1603          |
| 0.1789        | 0.9221 | 900  | 0.1596          |

### Framework versions

- Transformers 4.46.1
- PyTorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3