## Dataset Name

ASR transcripts of IEMOCAP for ASR error correction and emotion recognition

## Description

This dataset consists of ASR transcripts from 11 speech models, following the turns of the conversation in IEMOCAP, with corresponding speaker IDs and utterance IDs.

To acquire this dataset, please obtain the IEMOCAP license first (if you already have it, skip step 1). Specifically:

1. Submit a request to the SAIL lab at USC following their guidance: [IEMOCAP release page](https://sail.usc.edu/iemocap/iemocap_release.htm). All you have to do is read their license and fill out a Google form, which is straightforward.
2. When registering for this challenge, attach the approved license or a screenshot of the approval email as proof. We will then release the data to you.

The explanation for each key is as follows:

- `need_prediction`: indicates whether this utterance should be included in the prediction procedure. "yes" denotes utterances labeled with the Big4 emotions, which are widely used for emotion recognition on IEMOCAP; "no" denotes all other utterances. Note that we have removed the utterances that have no human annotations.
- `emotion`: indicates the emotion label of the utterance.
- `id`: indicates the utterance ID, which is also the name of the audio file in the IEMOCAP corpus. The ID is exactly the same as the raw ID in IEMOCAP.
- `speaker`: indicates the speaker of the utterance. Since there are two speakers in each session, there are ten speakers in total. It is important to note that the sixth character of the `id` DOES NOT represent the gender of the speaker, but rather the gender of the person currently wearing the motion-capture device. Please use our provided `speaker` field as the speaker ID.
- `groundtruth`: indicates the original human transcription provided by IEMOCAP.

The remaining keys contain the ASR transcriptions generated by the respective ASR models.
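As a minimal sketch of how the keys above might be consumed, the snippet below builds a hypothetical record with the documented fields and filters on `need_prediction`. The field values shown (emotion tag, ID format, speaker format, the `whisper` key standing in for one ASR model) are illustrative assumptions, not guaranteed to match the released files.

```python
# Hypothetical record mirroring the keys described above; all values
# are made up for illustration and may not match the released data.
record = {
    "need_prediction": "yes",          # "yes" = Big4 emotion label, used for prediction
    "emotion": "neu",                  # emotion label (format assumed)
    "id": "Ses01F_impro01_F000",       # raw IEMOCAP utterance ID (example only)
    "speaker": "Ses01_F",              # use this field, not the 6th character of `id`
    "groundtruth": "Excuse me.",       # original human transcription
    "whisper": "excuse me",            # hypothetical key for one ASR model's transcript
}

def keep_for_prediction(utt):
    """Keep only utterances flagged for the prediction procedure."""
    return utt["need_prediction"] == "yes"

# Typical usage: filter a list of records down to the Big4-labeled utterances.
filtered = [u for u in [record] if keep_for_prediction(u)]
```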
## Access

The dataset will be shared with you after you have registered.

## Acknowledgments

This dataset is created based on IEMOCAP. We thank the original authors of IEMOCAP and appreciate the approval of Prof. Shrikanth Narayanan.

## References

```
@article{busso2008iemocap,
  title={IEMOCAP: Interactive emotional dyadic motion capture database},
  author={Busso, Carlos and Bulut, Murtaza and Lee, Chi-Chun and Kazemzadeh, Abe and Mower, Emily and Kim, Samuel and Chang, Jeannette N and Lee, Sungbok and Narayanan, Shrikanth S},
  journal={Language Resources and Evaluation},
  volume={42},
  pages={335--359},
  year={2008},
  publisher={Springer}
}

@article{li2024speech,
  title={Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques},
  author={Li, Yuanchao and Bell, Peter and Lai, Catherine},
  journal={arXiv preprint arXiv:2406.08353},
  year={2024}
}
```