jackoyoungblood committed
Commit: 1d9adf4
Parent: 7d073ed

End of training

Files changed (1): README.md +16 -33
README.md CHANGED

@@ -5,24 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - marsyas/gtzan
-metrics:
-- accuracy
 model-index:
 - name: distilhubert-finetuned-gtzan
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: GTZAN
-      type: marsyas/gtzan
-      config: all
-      split: train
-      args: all
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.85
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6638
-- Accuracy: 0.85
+- Loss: 0.4454
 
 ## Model description
 
@@ -52,29 +36,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 8
+- learning_rate: 9.349509030319398e-05
+- train_batch_size: 12
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 9
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.0133        | 1.0   | 113  | 1.8013          | 0.49     |
-| 1.3855        | 2.0   | 226  | 1.2793          | 0.68     |
-| 0.9643        | 3.0   | 339  | 0.9452          | 0.74     |
-| 0.8143        | 4.0   | 452  | 0.8508          | 0.73     |
-| 0.613         | 5.0   | 565  | 0.7258          | 0.75     |
-| 0.3513        | 6.0   | 678  | 0.7811          | 0.76     |
-| 0.409         | 7.0   | 791  | 0.6630          | 0.81     |
-| 0.1786        | 8.0   | 904  | 0.6641          | 0.84     |
-| 0.2988        | 9.0   | 1017 | 0.6572          | 0.83     |
-| 0.1114        | 10.0  | 1130 | 0.6638          | 0.85     |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 1.904         | 1.0   | 75   | 1.7595          |
+| 1.2214        | 2.0   | 150  | 1.1147          |
+| 0.862         | 3.0   | 225  | 0.7765          |
+| 0.6679        | 4.0   | 300  | 0.6600          |
+| 0.4188        | 5.0   | 375  | 0.4797          |
+| 0.3369        | 6.0   | 450  | 0.5607          |
+| 0.1591        | 7.0   | 525  | 0.4668          |
+| 0.0591        | 8.0   | 600  | 0.4493          |
+| 0.0718        | 9.0   | 675  | 0.4454          |
 
 
 ### Framework versions
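
The new run's `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` can be sketched as a plain function, the behavior `transformers`' `get_linear_schedule_with_warmup` provides: the learning rate ramps linearly from 0 to the peak over the first 10% of optimizer steps, then decays linearly back to 0. The function name is hypothetical; the peak learning rate and warmup ratio come from the hyperparameters above, and the total step count of 675 is read off the new training table (75 steps/epoch × 9 epochs).

```python
def lr_at_step(step: int,
               peak_lr: float = 9.349509030319398e-05,  # learning_rate above
               total_steps: int = 675,                   # 75 steps/epoch x 9 epochs
               warmup_ratio: float = 0.1) -> float:      # lr_scheduler_warmup_ratio
    """Hypothetical sketch of a linear schedule with warmup."""
    warmup_steps = int(total_steps * warmup_ratio)  # 67 steps for this run
    if step < warmup_steps:
        # ramp linearly from 0 up to the peak learning rate
        return peak_lr * step / max(1, warmup_steps)
    # then decay linearly from the peak back to 0 at total_steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

Plotting `lr_at_step` over steps 0–675 gives the familiar triangle-shaped schedule: a short ramp peaking at step 67, then a long linear decay that reaches 0 exactly at the final step.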