---
license: apache-2.0
---
|
|
|
|
|
# Dataset for ASR Speaker-Tagging Corrections (Speaker Diarization)
|
|
|
|
|
## Description |
|
|
|
- This dataset contains pairs of erroneous ASR output with speaker tagging, generated by an ASR system and a speaker diarization system. Each erroneous source transcription is paired with a human-annotated transcription that provides the correct words and speaker tags.
|
- [SEGment-wise Long-form Speech Transcription annotation](#segment-wise-long-form-speech-transcription-annotation-seglst) (`SegLST`), the file format used in the [CHiME challenges](https://www.chimechallenge.org) |
|
|
|
|
|
Example: `session_ge1nse2c.seglst.json`
|
|
|
```json
|
[ |
|
... |
|
{ |
|
"session_id": "session_ge1nse2c", |
|
"words": "well that is the problem we have erroneous transcript and speaker tagging we want to correct it using large language models", |
|
"start_time": 181.88, |
|
"end_time": 193.3, |
|
"speaker": "speaker1" |
|
}, |
|
{ |
|
"session_id": "session_ge1nse2c", |
|
"words": "it seems like a really interesting problem I feel that we can start with very simple methods", |
|
"start_time": 194.48, |
|
"end_time": 205.03, |
|
"speaker": "speaker2" |
|
}, |
|
... |
|
] |
|
``` |
|
|
|
## Structure |
|
|
|
### Data Split |
|
|
|
The dataset is divided into development and evaluation splits:
|
|
|
- Development Data: 142 entries

  - 2 to 4 speakers in each session

  - Approximately 10 to 40 minutes of recording per session

- Evaluation Data: 104 entries

  - 2 to 4 speakers in each session

  - Approximately 10 to 40 minutes of recording per session
|
|
|
### Keys (items) |
|
|
|
- `session_id`: Unique session identifier (e.g., `session_ge1nse2c`).
|
- `words`: Transcribed words for the segment between `start_time` and `end_time`.
|
- `start_time`: Segment start time in seconds.

- `end_time`: Segment end time in seconds.
|
- `speaker`: Speaker tag as a string of the form `speaker<N>` (e.g., `speaker1`).
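
Since a SegLST file is a plain JSON array of segment dictionaries, the fields above can be read with nothing but the standard library. A minimal sketch (the segment contents are taken from the example above; it writes a temporary copy so it is self-contained, whereas in practice you would open a downloaded `*.seglst.json` file directly):

```python
import json
import os
import tempfile

# Two segments copied from the example session above.
segments = [
    {
        "session_id": "session_ge1nse2c",
        "words": "well that is the problem we have erroneous transcript and speaker tagging we want to correct it using large language models",
        "start_time": 181.88,
        "end_time": 193.3,
        "speaker": "speaker1",
    },
    {
        "session_id": "session_ge1nse2c",
        "words": "it seems like a really interesting problem I feel that we can start with very simple methods",
        "start_time": 194.48,
        "end_time": 205.03,
        "speaker": "speaker2",
    },
]

# Write a temporary copy so the sketch runs end to end.
path = os.path.join(tempfile.mkdtemp(), "session_ge1nse2c.seglst.json")
with open(path, "w") as f:
    json.dump(segments, f)

# A SegLST file is just a JSON array, so json.load() is all that is needed.
with open(path) as f:
    loaded = json.load(f)

# Group the transcribed words per speaker, ordered by segment start time.
by_speaker = {}
for seg in sorted(loaded, key=lambda s: s["start_time"]):
    by_speaker.setdefault(seg["speaker"], []).append(seg["words"])

print(sorted(by_speaker))  # ['speaker1', 'speaker2']
```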
|
|
|
### Source Datasets |
|
|
|
- `err_source_text`: The erroneous ASR and speaker diarization output to be corrected; contains `dev` and `eval` folders.

- `ref_annotated_text`: The human-annotated ground truth for evaluation; only the `dev` split is included.
|
|
|
- **Development Sources**: |
|
- `dev`: 142 sessions |
|
|
|
- **Evaluation Sources**: |
|
- `eval`: 104 Sessions |
|
|
|
## Access |
|
|
|
The dataset can be accessed and downloaded through the Hugging Face `datasets` library.
|
|
|
## Evaluation |
|
|
|
Corrected outputs can be scored with the [MeetEval](https://github.com/fgnt/meeteval) toolkit.
|
|
|
### From PyPI |
|
```shell
|
pip install meeteval |
|
``` |
|
|
|
### From source |
|
```shell
|
git clone https://github.com/fgnt/meeteval |
|
pip install -e ./meeteval |
|
``` |
|
|
|
### Evaluate the corrected SegLST files
|
```shell
|
python -m meeteval.wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotated_text/dev/session_ge1nse2c.json
|
``` |
|
Alternatively, after installation, you can use the `meeteval-wer` entry point:
|
```shell
|
meeteval-wer cpwer -h err_source_text/dev/session_ge1nse2c.json -r ref_annotated_text/dev/session_ge1nse2c.json
|
``` |
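
For intuition, cpWER concatenates each speaker's words and then scores the hypothesis under the speaker-label permutation that yields the fewest word errors, so correct words with swapped speaker tags are not penalized. A stdlib-only sketch of that idea (illustration only — it assumes both sides have the same number of speakers; use MeetEval itself for real scoring):

```python
import itertools

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance (substitutions, insertions, deletions)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def toy_cpwer(reference, hypothesis):
    """Concatenated minimum-permutation WER over speaker streams.

    reference/hypothesis map a speaker label to that speaker's concatenated
    transcript. Every pairing of hypothesis streams to reference streams is
    tried, and the cheapest total word-error count is kept.
    """
    ref_streams = [words.split() for words in reference.values()]
    hyp_streams = [words.split() for words in hypothesis.values()]
    best = min(
        sum(edit_distance(r, h) for r, h in zip(ref_streams, perm))
        for perm in itertools.permutations(hyp_streams)
    )
    return best / sum(len(s) for s in ref_streams)

# The hypothesis below has the right words but swapped speaker labels;
# the permutation search brings the error back to zero.
reference = {
    "speaker1": "well that is the problem",
    "speaker2": "it seems like a really interesting problem",
}
hypothesis = {
    "speaker1": "it seems like a really interesting problem",
    "speaker2": "well that is the problem",
}
print(toy_cpwer(reference, hypothesis))  # 0.0
```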
|
|
|
### References |
|
|
|
```bib |
|
@inproceedings{park2024enhancing, |
|
title={Enhancing speaker diarization with large language models: A contextual beam search approach}, |
|
author={Park, Tae Jin and Dhawan, Kunal and Koluguri, Nithin and Balam, Jagadeesh}, |
|
booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, |
|
pages={10861--10865}, |
|
year={2024}, |
|
organization={IEEE} |
|
} |
|
``` |
|
|
|
```bib |
|
@InProceedings{MeetEval23, |
|
title={MeetEval: A Toolkit for Computation of Word Error Rates for Meeting Transcription Systems}, |
|
author={von Neumann, Thilo and Boeddeker, Christoph and Delcroix, Marc and Haeb-Umbach, Reinhold}, |
|
booktitle={CHiME-2023 Workshop, Dublin, Ireland},
|
year={2023} |
|
} |
|
``` |