---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
  - generated_from_trainer
datasets:
  - marsyas/gtzan
metrics:
  - accuracy
model-index:
  - name: distilhubert-finetuned-gtzan
    results:
      - task:
          name: Audio Classification
          type: audio-classification
        dataset:
          name: GTZAN
          type: marsyas/gtzan
          config: all
          split: train
          args: all
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.85
---

# distilhubert-finetuned-gtzan

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set (a quick-start inference sketch follows the list):

- Loss: 0.5569
- Accuracy: 0.85
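
The sketch below shows one way to run inference with the `transformers` audio-classification pipeline. The repo id is inferred from this card's title and author and is not confirmed by the card itself; the audio path is a placeholder:

```python
from transformers import pipeline

# Hypothetical repo id; point this at wherever the checkpoint is actually hosted.
classifier = pipeline(
    "audio-classification",
    model="bxleigh/distilhubert-finetuned-gtzan",
)

# Classify a local audio clip (GTZAN excerpts are ~30 s music recordings).
preds = classifier("path/to/clip.wav")
print(preds)  # list of {"label": genre, "score": probability} dicts
```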

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):

- learning_rate: 8e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
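
The `generated_from_trainer` tag indicates the Hugging Face `Trainer` API was used, so as a rough guide the list above maps onto `TrainingArguments` as sketched below. The `output_dir` name is hypothetical, and anything not listed above (including the Adam betas and epsilon, which are the library defaults) is left unset:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # hypothetical output directory
    learning_rate=8e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=32,  # 6 x 32 = 192 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=60,
)
```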

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.85  | 4    | 2.2836          | 0.14     |
| 2.2984        | 1.92  | 9    | 2.2574          | 0.18     |
| 2.2856        | 2.99  | 14   | 2.2060          | 0.32     |
| 2.2478        | 3.84  | 18   | 2.1331          | 0.37     |
| 2.1775        | 4.91  | 23   | 1.9859          | 0.47     |
| 2.0557        | 5.97  | 28   | 1.8086          | 0.52     |
| 1.8764        | 6.83  | 32   | 1.6783          | 0.53     |
| 1.7133        | 7.89  | 37   | 1.5235          | 0.54     |
| 1.5661        | 8.96  | 42   | 1.4048          | 0.58     |
| 1.4544        | 9.81  | 46   | 1.3279          | 0.6      |
| 1.3365        | 10.88 | 51   | 1.2591          | 0.67     |
| 1.2228        | 11.95 | 56   | 1.1587          | 0.7      |
| 1.1298        | 12.8  | 60   | 1.1476          | 0.68     |
| 1.0601        | 13.87 | 65   | 1.0066          | 0.77     |
| 0.9886        | 14.93 | 70   | 0.9855          | 0.76     |
| 0.923         | 16.0  | 75   | 0.9767          | 0.73     |
| 0.923         | 16.85 | 79   | 0.8896          | 0.79     |
| 0.8539        | 17.92 | 84   | 0.8421          | 0.78     |
| 0.788         | 18.99 | 89   | 0.8270          | 0.8      |
| 0.7253        | 19.84 | 93   | 0.7764          | 0.82     |
| 0.6523        | 20.91 | 98   | 0.6998          | 0.85     |
| 0.5853        | 21.97 | 103  | 0.6891          | 0.87     |
| 0.5372        | 22.83 | 107  | 0.7106          | 0.8      |
| 0.4815        | 23.89 | 112  | 0.6542          | 0.82     |
| 0.4461        | 24.96 | 117  | 0.6136          | 0.87     |
| 0.3841        | 25.81 | 121  | 0.6338          | 0.81     |
| 0.3505        | 26.88 | 126  | 0.6082          | 0.87     |
| 0.3143        | 27.95 | 131  | 0.5776          | 0.88     |
| 0.2913        | 28.8  | 135  | 0.5833          | 0.86     |
| 0.2519        | 29.87 | 140  | 0.5543          | 0.89     |
| 0.2234        | 30.93 | 145  | 0.5606          | 0.84     |
| 0.1994        | 32.0  | 150  | 0.5726          | 0.86     |
| 0.1994        | 32.85 | 154  | 0.5391          | 0.86     |
| 0.1789        | 33.92 | 159  | 0.5908          | 0.83     |
| 0.1615        | 34.99 | 164  | 0.5498          | 0.85     |
| 0.1444        | 35.84 | 168  | 0.5389          | 0.85     |
| 0.1303        | 36.91 | 173  | 0.5829          | 0.84     |
| 0.1192        | 37.97 | 178  | 0.5278          | 0.87     |
| 0.1074        | 38.83 | 182  | 0.6011          | 0.83     |
| 0.1001        | 39.89 | 187  | 0.5260          | 0.87     |
| 0.0935        | 40.96 | 192  | 0.5778          | 0.84     |
| 0.0885        | 41.81 | 196  | 0.5563          | 0.86     |
| 0.0827        | 42.88 | 201  | 0.5556          | 0.86     |
| 0.0785        | 43.95 | 206  | 0.5807          | 0.84     |
| 0.0767        | 44.8  | 210  | 0.5649          | 0.85     |
| 0.0722        | 45.87 | 215  | 0.5551          | 0.85     |
| 0.0718        | 46.93 | 220  | 0.5432          | 0.86     |
| 0.0701        | 48.0  | 225  | 0.5720          | 0.85     |
| 0.0701        | 48.85 | 229  | 0.5695          | 0.85     |
| 0.068         | 49.92 | 234  | 0.5642          | 0.85     |
| 0.0673        | 50.99 | 239  | 0.5571          | 0.85     |
| 0.0672        | 51.2  | 240  | 0.5569          | 0.85     |

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3