jackoyoungblood committed on
Commit 82a6425 · 1 Parent(s): a8a6250

Training in progress, epoch 1

Files changed (4)
  1. README.md +7 -7
  2. config.json +1 -1
  3. pytorch_model.bin +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.74
+    value: 0.71
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0103
-- Accuracy: 0.74
+- Loss: 0.9296
+- Accuracy: 0.71
 
 ## Model description
 
@@ -52,7 +52,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.000120184954960149
+- learning_rate: 0.0001824504502691984
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
@@ -65,13 +65,13 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.5086 | 1.0 | 113 | 1.3371 | 0.52 |
-| 1.0486 | 2.0 | 226 | 1.0103 | 0.74 |
+| 1.424 | 1.0 | 113 | 1.1991 | 0.53 |
+| 0.9403 | 2.0 | 226 | 0.9296 | 0.71 |
 
 
 ### Framework versions
 
 - Transformers 4.32.1
-- Pytorch 2.0.1+cu118
+- Pytorch 1.13.1
 - Datasets 2.14.4
 - Tokenizers 0.13.3
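Not part of the commit itself: a minimal inference sketch for the checkpoint this card describes, assuming the standard `transformers` audio-classification pipeline. The repository id and audio path below are placeholders, since neither appears in this diff.

```python
# Minimal sketch (not from the commit): loading the fine-tuned GTZAN checkpoint
# for genre classification. "your-username/distilhubert-finetuned-gtzan" is a
# placeholder repo id; substitute the repository this commit belongs to.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="your-username/distilhubert-finetuned-gtzan",  # placeholder id
)

# GTZAN clips are 30 s audio files; any local audio path works here.
predictions = classifier("example_clip.wav")
print(predictions[:3])  # top predicted genres with scores
```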
config.json CHANGED
@@ -39,7 +39,7 @@
   "ctc_loss_reduction": "sum",
   "ctc_zero_infinity": false,
   "do_stable_layer_norm": false,
-  "dropout": 0.07647722007522062,
+  "dropout": 0.09098767854928308,
   "eos_token_id": 2,
   "feat_extract_activation": "gelu",
   "feat_extract_norm": "group",
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2c230e8148055d0d32afa2cdbe91c385ff3162573701b932d28af0c5d1ce096a
+oid sha256:0c75ff2f3bacbeb0b015c7ee9fad54a8c35478e7b6eea8a7389c4ce43e4d2b54
 size 94783376
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:70fba0c25c74d457c055baa6d457a795267604ef6c9558b0822c26a0c759ce9d
+oid sha256:15c46fd79dddb31b8927e28f097bfe16f10e0bf27a40c7b7a1bd41730320532f
 size 4091
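The two binary files are Git LFS pointers, so the diff only swaps the sha256 `oid`. A small sketch (an assumption, not part of the repo) of verifying a downloaded `pytorch_model.bin` against the new oid:

```python
# Sketch (not from the commit): check that a downloaded pytorch_model.bin
# matches the LFS oid recorded in this commit.
import hashlib

EXPECTED_OID = "0c75ff2f3bacbeb0b015c7ee9fad54a8c35478e7b6eea8a7389c4ce43e4d2b54"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so a large checkpoint never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("pytorch_model.bin") == EXPECTED_OID)
```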