# Wav2Vec2_Fine_tuned_on_RAVDESS_2_Speech_Emotion_Recognition

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english).

The dataset used to fine-tune the original pre-trained model is the [RAVDESS dataset](https://paperswithcode.com/dataset/ravdess). This dataset provides 7442 recorded samples of actors performing 8 different emotions in English:

```python
emotions = ['angry', 'calm', 'disgust', 'fearful', 'happy', 'neutral', 'sad', 'surprised']
```

The model achieves the following results on the evaluation set:
- Loss: 0.5638
- Accuracy: 0.8125