Julien Simon committed
Commit · 9ceb12a · Parent(s): 2470e8c
update model card README.md

README.md CHANGED
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
-- F1: [0.
-- Precision: [0.
-- Recall: [0.
+- Loss: 0.9524
+- Accuracy: 0.579
+- F1: [0.62880121 0.47009599 0.50419753 0.55847134 0.73663068]
+- Precision: [0.63086233 0.46744983 0.4887506 0.58988159 0.72372965]
+- Recall: [0.62675351 0.47277228 0.52065273 0.53023706 0.75 ]

 ## Model description

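The bracketed F1, Precision, and Recall values are per-class arrays (one entry per label) rather than single averages. A minimal sketch of how such arrays are typically produced in a Trainer `compute_metrics` callback, assuming scikit-learn and a five-label classification task (the function and label count are illustrative, not taken from this commit):

```python
# Illustrative sketch only: how per-class metric arrays like the ones in this
# card are commonly produced. scikit-learn and the 5-label setup are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average=None returns one value per class, which is why F1/Precision/Recall
    # appear as bracketed arrays instead of single numbers.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average=None, zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```

Averaging these arrays (for example with `average="weighted"`) would collapse them to single scalars like the reported Accuracy.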
@@ -44,23 +44,19 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
 - train_batch_size: 32
-- eval_batch_size:
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 1
-- mixed_precision_training: Native AMP

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:---------:|:---------:|
-| 0.9618 | 1.0 | 2813 | 0.9526 | 0.5793 | [0.63065766 0.46287992 0.50875894 0.55936944 0.73581605] | [0.62955567 0.46589769 0.49282983 0.58949625 0.7198044 ] | [0.63176353 0.45990099 0.52575217 0.53217223 0.75255624] |


 ### Framework versions

-- Transformers 4.
-- Pytorch
+- Transformers 4.30.2
+- Pytorch 2.0.1+cu117
 - Datasets 2.12.0
 - Tokenizers 0.13.3
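For orientation, the hyperparameters listed above map roughly onto a Hugging Face `TrainingArguments` configuration like the sketch below. This is an assumption about how the run was set up, not the actual training script; the output directory is a placeholder.

```python
# Hedged reconstruction of the training setup implied by the hyperparameters
# above; the output directory is a placeholder and not taken from the commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-finetuned",   # placeholder, not the real repo name
    learning_rate=5e-5,                  # learning_rate: 5e-05
    per_device_train_batch_size=32,      # train_batch_size: 32
    per_device_eval_batch_size=8,        # eval_batch_size: 8
    seed=42,                             # seed: 42
    num_train_epochs=1,                  # num_epochs: 1
    lr_scheduler_type="linear",          # lr_scheduler_type: linear
    adam_beta1=0.9,                      # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                   # and epsilon=1e-08
)
```

The 2813 steps in the removed training-results row are consistent with roughly 90,000 training examples at batch size 32 for one epoch, assuming a single device and no gradient accumulation.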