idah4 committed on
Commit
304c6dd
1 Parent(s): 79c35c6

Model save

README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ license: mit
+ base_model: hyunwoongko/kobart
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: qa_kor_hospital_2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # qa_kor_hospital_2
+
+ This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.4155
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 400
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | No log        | 0.1   | 100  | 3.9336          |
+ | No log        | 0.2   | 200  | 2.0259          |
+ | No log        | 0.31  | 300  | 1.7706          |
+ | No log        | 0.41  | 400  | 1.6725          |
+ | 3.5011        | 0.51  | 500  | 1.6036          |
+ | 3.5011        | 0.61  | 600  | 1.5641          |
+ | 3.5011        | 0.71  | 700  | 1.5375          |
+ | 3.5011        | 0.82  | 800  | 1.5182          |
+ | 3.5011        | 0.92  | 900  | 1.5092          |
+ | 1.5608        | 1.02  | 1000 | 1.4925          |
+ | 1.5608        | 1.12  | 1100 | 1.4824          |
+ | 1.5608        | 1.22  | 1200 | 1.4736          |
+ | 1.5608        | 1.33  | 1300 | 1.4663          |
+ | 1.5608        | 1.43  | 1400 | 1.4565          |
+ | 1.4641        | 1.53  | 1500 | 1.4531          |
+ | 1.4641        | 1.63  | 1600 | 1.4429          |
+ | 1.4641        | 1.73  | 1700 | 1.4374          |
+ | 1.4641        | 1.84  | 1800 | 1.4337          |
+ | 1.4641        | 1.94  | 1900 | 1.4316          |
+ | 1.4129        | 2.04  | 2000 | 1.4302          |
+ | 1.4129        | 2.14  | 2100 | 1.4281          |
+ | 1.4129        | 2.24  | 2200 | 1.4243          |
+ | 1.4129        | 2.35  | 2300 | 1.4224          |
+ | 1.4129        | 2.45  | 2400 | 1.4214          |
+ | 1.3777        | 2.55  | 2500 | 1.4192          |
+ | 1.3777        | 2.65  | 2600 | 1.4173          |
+ | 1.3777        | 2.76  | 2700 | 1.4174          |
+ | 1.3777        | 2.86  | 2800 | 1.4159          |
+ | 1.3777        | 2.96  | 2900 | 1.4155          |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
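
The card above does not yet show how to load the checkpoint. As a minimal sketch, assuming the repository is published on the Hub as `idah4/qa_kor_hospital_2` (the repo id, and the expected question format, are assumptions not stated in the card), the fine-tuned KoBART model can be loaded and queried with `transformers`:

```python
# Minimal usage sketch; repo id and input format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "idah4/qa_kor_hospital_2"  # assumed Hub id for this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical Korean hospital-domain question; the training data format is not documented.
question = "How do I book a hospital appointment?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```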
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "decoder_start_token_id": 1,
+   "eos_token_id": 1,
+   "forced_eos_token_id": 1,
+   "pad_token_id": 3,
+   "transformers_version": "4.38.2"
+ }
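
This file holds the default decoding settings that `model.generate()` picks up automatically (BOS/EOS/pad token ids above). A short sketch for inspecting them, again assuming the Hub id `idah4/qa_kor_hospital_2`:

```python
# Sketch: load and inspect the committed generation defaults (repo id assumed).
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("idah4/qa_kor_hospital_2")
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.pad_token_id)
# generate() applies these defaults unless they are overridden per call.
```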
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fa433f9001eb079c42cde76dd3649fd17b64c8b567379b8917e490293858548c
+ oid sha256:55daaf029b7aec8954745e9e6c796bdeb2d09173371c8540573eea05686fff6e
  size 495589768
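
Only the Git LFS pointer changes here: the new weights have a different sha256 but the same byte size. A hedged sketch (the local path is an assumption) for checking that a downloaded `model.safetensors` matches the pointer's oid:

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer's sha256.
import hashlib

expected = "55daaf029b7aec8954745e9e6c796bdeb2d09173371c8540573eea05686fff6e"
path = "model.safetensors"  # assumed local path after `git lfs pull` or a hub download

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
print(sha.hexdigest() == expected)
```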
runs/Apr16_00-32-51_a5ac0e95e35a/events.out.tfevents.1713227596.a5ac0e95e35a.353.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5ecd227459972f03aeb5b976d7f6ed7188fa9c15328bb42eb9cbfc0ae881d2b6
- size 12842
+ oid sha256:f4064fa856ef47455a024e66f72f39d458a03cdf003a62a7a90eb79ec6c92154
+ size 14762
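
The updated event file contains the TensorBoard scalars behind the loss table in the README. As a sketch, assuming the run directory has been pulled locally (the scalar tag names are assumptions; the Hugging Face Trainer typically logs `train/loss` and `eval/loss`):

```python
# Sketch: read logged scalars from the updated TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Assumed local path; the run directory matches the one in the diff header above.
acc = EventAccumulator("runs/Apr16_00-32-51_a5ac0e95e35a")
acc.Reload()

print(acc.Tags()["scalars"])            # available tags, e.g. "train/loss", "eval/loss"
for event in acc.Scalars("eval/loss"):  # tag name is an assumption
    print(event.step, event.value)
```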