urbija committed on
Commit 46c23ad
1 Parent(s): 4158b01

Training complete

README.md CHANGED
@@ -2,6 +2,11 @@
  base_model: dmis-lab/biobert-v1.1
  tags:
  - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
  model-index:
  - name: cer_model
    results: []
@@ -14,16 +19,11 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 0.0011
- - eval_precision: 0.0
- - eval_recall: 0.0
- - eval_f1: 0.0
- - eval_accuracy: 0.9999
- - eval_runtime: 169.5558
- - eval_samples_per_second: 1.103
- - eval_steps_per_second: 0.142
- - epoch: 2.0
- - step: 282
+ - Loss: 0.0008
+ - Precision: 0.0
+ - Recall: 0.0
+ - F1: 0.0
+ - Accuracy: 0.9999

  ## Model description

@@ -43,16 +43,25 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 3

+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1  | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
+ | 0.0006        | 1.0   | 71   | 0.0011          | 0.0       | 0.0    | 0.0 | 0.9999   |
+ | 0.0007        | 2.0   | 142  | 0.0009          | 0.0       | 0.0    | 0.0 | 0.9999   |
+ | 0.0003        | 3.0   | 213  | 0.0008          | 0.0       | 0.0    | 0.0 | 0.9999   |
+
+
  ### Framework versions

  - Transformers 4.37.0
- - Pytorch 2.1.2+cpu
+ - Pytorch 2.1.2
  - Datasets 2.1.0
  - Tokenizers 0.15.1
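
As a quick reference for the hyperparameters listed in the updated card, here is a minimal sketch of how that configuration could be expressed with the `transformers` Trainer API (4.37.0, per the Framework versions section). The token-classification head, `num_labels`, and output directory are illustrative assumptions; the card itself leaves the dataset unspecified ("None dataset").

```python
# Sketch only: mirrors the hyperparameters listed in the updated model card.
# The task head (token classification), num_labels, and output_dir are
# illustrative assumptions -- the card does not specify them.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

base_model = "dmis-lab/biobert-v1.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model, num_labels=2)  # num_labels assumed

training_args = TrainingArguments(
    output_dir="cer_model",           # name taken from the model-index entry
    learning_rate=2e-5,
    per_device_train_batch_size=16,   # changed from 8 to 16 in this commit
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    # The Trainer default optimizer (AdamW, betas=(0.9, 0.999), eps=1e-08)
    # matches the optimizer listed in the card.
)
```

A `Trainer` built from these arguments, the tokenized dataset, and a metric function reporting precision, recall, F1, and accuracy would correspond to the training loop summarized in the results table above; the card does not say which metric implementation was used.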
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:44488c7b23edfbfe4d9844be27dd3661681fda7839c8cc68b11a163b32c99d43
+ oid sha256:7d1f21c32f45bd3e689e7de130caf22bcd36a912687580b8025507d7dda14543
  size 430911284
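
This commit replaces the previous `model.safetensors` weights (same 430911284-byte size, new content hash). For illustration, a checkpoint like this is normally loaded through the standard `transformers` API; the repository id `urbija/cer_model` and the token-classification task below are assumptions inferred from the commit author and model name, not something the diff states.

```python
# Sketch only: loading the updated checkpoint for inference.
# "urbija/cer_model" and the token-classification task are assumptions for
# illustration; substitute the actual repository id and task.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="urbija/cer_model",
    aggregation_strategy="simple",
)
print(ner("Aspirin was administered to reduce the patient's fever."))
```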
runs/Feb12_08-25-17_df2f9c3eb12e/events.out.tfevents.1707726365.df2f9c3eb12e.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b4b626baf3bf014ef0d734520a6762939a713d0d4a0e47f33677fc727c1a6a4
+ size 27204
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:5f8f0a2b2a58d0436fc256efc55ac871afa440aab19fc3c00b7c0a884f4e59b5
3
  size 4728
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ee99234d04e8fea5ae1b43a4ce921f24947329d3b917f6abf033b3dd7400b54
3
  size 4728
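
`training_args.bin` is the serialized `TrainingArguments` object that the `Trainer` writes next to a checkpoint, so it offers a quick way to confirm the hyperparameters quoted in the card. A sketch, assuming the file has been downloaded locally:

```python
# Sketch only: inspect the serialized TrainingArguments shipped with the
# checkpoint. transformers must be importable, since the file unpickles to a
# transformers.TrainingArguments instance.
import torch

args = torch.load("training_args.bin")  # on PyTorch >= 2.6, add weights_only=False
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```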