---
language:
  - bn
license: cc-by-nc-4.0
task_categories:
  - automatic-speech-recognition
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: text
      dtype: string
    - name: duration
      dtype: float64
    - name: category
      dtype: string
    - name: source
      dtype: string
  splits:
    - name: train
      num_bytes: 219091915.875
      num_examples: 1753
  download_size: 214321460
  dataset_size: 219091915.875
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# MegaBNSpeech Test Data

To evaluate model performance, we used four test sets. Two were developed as part of the MegaBNSpeech corpus, while the remaining two (Fleurs and Common Voice) are publicly available benchmarks widely used by the speech community.

Load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("hishab/MegaBNSpeech_Test_Data")
```
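Once loaded, per-category duration totals like those in the tables below can be recomputed from each example's `category` and `duration` fields. A minimal sketch, shown here with stand-in records rather than the real dataset (the field names follow the schema above; the values are illustrative only):

```python
from collections import defaultdict

def duration_by_category(examples):
    """Sum clip durations per category label."""
    totals = defaultdict(float)
    for ex in examples:
        totals[ex["category"]] += ex["duration"]
    return dict(totals)

# Stand-in records mirroring the dataset schema (hypothetical values, seconds).
fake = [
    {"category": "News", "duration": 4.5},
    {"category": "News", "duration": 3.25},
    {"category": "Waz", "duration": 6.5},
]
print(duration_by_category(fake))  # {'News': 7.75, 'Waz': 6.5}
```

With the real dataset, pass `dataset["train"]` instead of the stand-in list.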

Word error rate (WER) / character error rate (CER) reported on four test sets using four ASR systems (each cell shows WER/CER; lower is better):

| Category | Duration (hr) | MegaBNSpeech | Google | MMS | OOD-speech |
| --- | --- | --- | --- | --- | --- |
| MegaBNSpeech-YT | 8.1 | 6.4/3.39 | 28.3/18.88 | 51.1/23.49 | |
| MegaBNSpeech-Tel | 1.9 | ∗40.7/24.38 | ∗59/41.26 | ∗76.8/39.36 | ∗69.9/52.93 |
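WER and CER are standard edit-distance ratios: the Levenshtein distance between hypothesis and reference (over words or characters, respectively) divided by the reference length. A minimal self-contained sketch; this is not the authors' evaluation script, which the card does not specify:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i          # prev holds the diagonal cell
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,              # deletion
                dp[j - 1] + 1,          # insertion
                prev + (r != h),        # substitution (0 cost if equal)
            )
    return dp[-1]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer("a b c", "a x c"))  # one substitution out of three words
```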

Word error rate (WER) / character error rate (CER) reported per content category of the test data, comparing Hishab BN FastConformer against baseline systems (each cell shows WER/CER; lower is better):

| Category | Duration (hr) | Hishab BN FastConformer | Google MMS | OOD-speech |
| --- | --- | --- | --- | --- |
| News | 1.21 | 2.5/1.21 | 18.9/10.46 | 52.2/21.65 |
| Talkshow | 1.39 | 6/3.29 | 28/18.71 | 48.8/21.5 |
| Courses | 3.81 | 6.8/3.79 | 30.8/21.64 | 50.2/23.52 |
| Drama | 0.03 | 10.3/7.47 | 37.3/27.43 | 64.3/32.74 |
| Science | 0.26 | 5/1.92 | 20.6/11.4 | 45.3/19.93 |
| Vlog | 0.18 | 11.3/6.69 | 33/22.9 | 57.9/27.18 |
| Recipie | 0.58 | 7.5/3.29 | 26.4/16.6 | 53.3/26.89 |
| Waz | 0.49 | 9.6/5.45 | 33.3/23.1 | 57.3/27.46 |
| Movie | 0.1 | 8/4.64 | 35.2/23.88 | 64.4/34.96 |
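The per-category numbers can be collapsed into a single corpus-level figure by weighting each category's WER by its duration. This aggregation is our own illustration, not a statistic reported by the authors; the values below are the Hishab BN FastConformer WERs from the table:

```python
# (category, duration in hours, Hishab BN FastConformer WER) from the table above.
rows = [
    ("News", 1.21, 2.5), ("Talkshow", 1.39, 6.0), ("Courses", 3.81, 6.8),
    ("Drama", 0.03, 10.3), ("Science", 0.26, 5.0), ("Vlog", 0.18, 11.3),
    ("Recipie", 0.58, 7.5), ("Waz", 0.49, 9.6), ("Movie", 0.1, 8.0),
]

total_hr = sum(hr for _, hr, _ in rows)
weighted_wer = sum(hr * w for _, hr, w in rows) / total_hr
print(f"{total_hr:.2f} h total, duration-weighted WER = {weighted_wer:.2f}")
```

Note that the large Courses category dominates the weighted average, so the result sits close to the Courses WER rather than the unweighted mean.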