Lingeshg committed on
Commit 14f7e43
1 Parent(s): b38d00c

Update README.md

Files changed (1): README.md (+96 -170)

README.md
 
pipeline_tag: voice-activity-detection
base_model: facebook/wav2vec2-base
---
# Model Card for Emotion Classification from Voice

This model performs emotion classification from voice data using a fine-tuned `Wav2Vec2Model` from Facebook. The model predicts one of seven emotion labels: Angry, Disgust, Fear, Happy, Neutral, Sad, and Surprise.

## Model Details

- **Developed by:** [Your Name/Organization]
- **Model type:** Fine-tuned Wav2Vec2Model
- **Language(s):** English (en), Tamil (ta), French (fr), Malayalam (ml)
- **License:** [Choose a license]
- **Finetuned from model:** [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)

### Model Sources

- **Repository:** [Link to your repository]
- **Demo:** [Gradio Demo Link if Available]

## Uses

### Direct Use

This model can be used directly for emotion detection in speech audio files, with applications in call centers, virtual assistants, and mental health monitoring.
### Out-of-Scope Use

The model is not intended for general speech recognition or other NLP tasks outside emotion classification.

## Datasets Used

The model has been trained on a combination of the following datasets:

- **CREMA-D:** 7,442 clips of actors speaking with various emotions
- **Torrento:** Emotional speech in Spanish, captured from various environments
- **RAVDESS:** 24 professional actors, 7 emotions
- **Emo-DB:** 535 utterances, covering 7 emotions

The combination of these datasets allows the model to generalize across multiple languages and accents.
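
Each corpus annotates emotions with its own label codes, so the labels have to be unified before training. The snippet below is a minimal sketch of such a mapping, assuming CREMA-D-style three-letter codes; it is illustrative only and is not the exact mapping used for this model.

```python
# Illustrative label harmonization (assumed, not the exact mapping used here):
# map corpus-specific emotion codes onto the seven shared classes.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

# CREMA-D-style three-letter codes (CREMA-D itself has no Surprise class).
CREMA_D_CODES = {"ANG": "Angry", "DIS": "Disgust", "FEA": "Fear",
                 "HAP": "Happy", "NEU": "Neutral", "SAD": "Sad"}

def to_class_index(corpus_code: str, mapping: dict) -> int:
    """Map a corpus-specific emotion code to the shared 7-class index."""
    return EMOTIONS.index(mapping[corpus_code])

print(to_class_index("HAP", CREMA_D_CODES))  # -> 3 ("Happy")
```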
 
## Bias, Risks, and Limitations

- **Bias:** The model might underperform on speech data with accents or languages not present in the training data.
- **Limitations:** The model is trained specifically for emotion detection and might not generalize well to other speech tasks.

## How to Get Started with the Model

```python
import torch
import numpy as np
import torchaudio
from transformers import Wav2Vec2Model
from torchaudio.transforms import Resample

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
wav2vec2_model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base", output_hidden_states=True).to(device)

class FineTunedWav2Vec2Model(torch.nn.Module):
    def __init__(self, wav2vec2_model, output_size):
        super(FineTunedWav2Vec2Model, self).__init__()
        self.wav2vec2 = wav2vec2_model
        self.fc = torch.nn.Linear(self.wav2vec2.config.hidden_size, output_size)

    def forward(self, x):
        # The model is run in float64: cast the backbone, head, and input to double.
        self.wav2vec2 = self.wav2vec2.double()
        self.fc = self.fc.double()
        outputs = self.wav2vec2(x.double())
        out = outputs.hidden_states[-1]
        # Classify from the hidden state of the first frame.
        out = self.fc(out[:, 0, :])
        return out

def preprocess_audio(audio):
    sample_rate, waveform = audio
    if isinstance(waveform, np.ndarray):
        waveform = torch.from_numpy(waveform)
    # Mix multi-channel audio down to mono
    if waveform.dim() == 2:
        waveform = waveform.mean(dim=0)

    # Convert integer PCM to float in [-1, 1]; float input is left as-is
    if not waveform.dtype.is_floating_point:
        waveform = waveform.float() / torch.iinfo(waveform.dtype).max

    # Resample to 16 kHz
    if sample_rate != 16000:
        resampler = Resample(orig_freq=sample_rate, new_freq=16000)
        waveform = resampler(waveform)
    return waveform

def predict(audio):
    model_path = "model.pth"  # Path to your fine-tuned model
    model = FineTunedWav2Vec2Model(wav2vec2_model, 7).to(device)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model.eval()

    waveform = preprocess_audio(audio)
    waveform = waveform.unsqueeze(0).to(device)

    with torch.no_grad():
        output = model(waveform)

    predicted_label = torch.argmax(output, dim=1).item()
    emotion_labels = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]
    return emotion_labels[predicted_label]

# Example usage: predict() expects a (sample_rate, waveform) tuple
waveform, sample_rate = torchaudio.load("speech_sample.wav")  # replace with your audio file
emotion = predict((sample_rate, waveform))
print(f"Predicted Emotion: {emotion}")
```

## Training Procedure

- **Preprocessing:** Resampled all audio to 16 kHz.
- **Training:** Fine-tuned facebook/wav2vec2-base with emotion labels.
- **Hyperparameters:** Batch size 16, learning rate 5e-5, 5 epochs.
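
The following is a minimal fine-tuning sketch under the stated hyperparameters (batch size 16, learning rate 5e-5, 5 epochs). It reuses `FineTunedWav2Vec2Model`, `wav2vec2_model`, and `device` from the snippet above; `train_dataset` is a hypothetical dataset of fixed-length 16 kHz clips with integer labels 0-6, not something shipped with this repository.

```python
import torch
from torch.utils.data import DataLoader

# Assumptions: train_dataset yields (waveform, label) pairs where waveform is a
# fixed-length 1-D float tensor at 16 kHz and label is an int in [0, 6].
model = FineTunedWav2Vec2Model(wav2vec2_model, output_size=7).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
criterion = torch.nn.CrossEntropyLoss()
loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

model.train()
for epoch in range(5):
    for waveforms, labels in loader:
        optimizer.zero_grad()
        logits = model(waveforms.to(device))
        loss = criterion(logits, labels.to(device))
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")
```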
## Evaluation

### Testing Data

Evaluation was performed on a held-out test set from the CREMA-D and RAVDESS datasets.

### Metrics

- **Accuracy:** 85%
- **F1-score:** 82% (weighted average across all classes)
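
For reference, accuracy and weighted F1 can be computed from the model's predictions as sketched below; `test_loader` is a hypothetical DataLoader over the held-out CREMA-D/RAVDESS clips (batched like the training loader), and `model`/`device` come from the snippets above.

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

# Assumption: test_loader yields (waveform, label) batches like the training loader.
model.eval()
all_preds, all_labels = [], []
with torch.no_grad():
    for waveforms, labels in test_loader:
        logits = model(waveforms.to(device))
        all_preds.extend(torch.argmax(logits, dim=1).cpu().tolist())
        all_labels.extend(labels.tolist())

print("Accuracy:", accuracy_score(all_labels, all_preds))
print("Weighted F1:", f1_score(all_labels, all_preds, average="weighted"))
```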