---
task_categories:
  - audio-classification
license: cc
tags:
  - bird classification
  - passive acoustic monitoring
---

# Dataset Description

## Datasets

We present the BirdSet benchmark, which covers a comprehensive range of classification datasets in avian bioacoustics. We offer a static set of evaluation datasets and a varied collection of training datasets, enabling the application of diverse methodologies.

| Dataset | train | test | test_5s | size (GB) | #classes |
|--------------------------|--------:|-------:|--------:|----------:|--------------|
| PER (Amazon Basin)        | 16,802  | 14,798 | 15,120  | 10.5      | 132          |
| NES (Colombia Costa Rica) | 16,117  | 6,952  | 24,480  | 14.2      | 89           |
| UHH (Hawaiian Islands)    | 3,626   | 59,583 | 36,637  | 4.92      | 25 tr, 27 te |
| HSN (High Sierras)        | 5,460   | 10,296 | 12,000  | 5.92      | 21           |
| NBP (NIPS4BPlus)          | 24,327  | 5,493  | 563     | 29.9      | 51           |
| POW (Powdermill Nature)   | 14,911  | 16,052 | 4,560   | 15.7      | 48           |
| SSW (Sapsucker Woods)     | 28,403  | 50,760 | 205,200 | 35.2      | 81           |
| SNE (Sierra Nevada)       | 19,390  | 20,147 | 23,756  | 20.8      | 56           |
| XCM (Xenocanto Subset M)  | 89,798  | x      | x       | 89.3      | 409          |
| XCL (Xenocanto Complete)  | 528,434 | x      | x       | 484       | 9,734        |
- We assemble a training dataset for each test dataset as a subset of a complete XC snapshot: we extract all recordings that contain vocalizations of the bird species appearing in the test dataset.
- We use the .ogg format for every recording and a sampling rate of 32 kHz.
- Each sample in the training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
- Each recording in the training datasets has a unique recordist and the corresponding license from XC. We omit all recordings from XC that are CC-ND.
- The bird species are translated to ebird_codes.
- Snapshot date of XC: 03/10/2024
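The subsets above can be loaded with the Hugging Face `datasets` library. The following is only a minimal sketch: the repository id `DBD-research-group/BirdSet` and the configuration name `"HSN"` are assumptions and should be replaced with the actual Hub location and the subset you need.

```python
from datasets import load_dataset, Audio

# Assumed hub id and configuration name -- adjust to the actual repository
# and to the subset you need (e.g. "PER", "HSN", "XCM", "XCL").
# Depending on how the dataset is published, trust_remote_code=True may be required.
ds = load_dataset("DBD-research-group/BirdSet", "HSN")

# Recordings are .ogg files at 32 kHz; casting the audio column makes the
# sampling rate explicit (or lets you resample on the fly).
ds = ds.cast_column("audio", Audio(sampling_rate=32_000))

sample = ds["train"][0]
print(sample["ebird_code"], sample["audio"]["array"].shape)
```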
### Train
- Exclusively focal audio data from Xeno-Canto (XC) with quality ratings A, B, and C, excluding all recordings that are CC-ND.
- Each dataset is tailored to the target species identified in the corresponding test soundscape files.
- We transform the scientific names of the birds into the corresponding ebird_code label.
- We offer detected events and corresponding cluster assignments to identify bird sounds in each recording (see the sketch after this list).
- We provide the full recordings from XC, so multiple training samples can be generated from a single instance.
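A minimal sketch of how the detected events could be used to cut fixed-length training clips from a full recording. The helper name `crop_event` is illustrative, and treating an `event_cluster` value of -1 as an unassigned/noise event is an assumption based on the example metadata below.

```python
import numpy as np

def crop_event(audio, sr, event, window=5.0):
    """Cut a fixed-length window (in seconds) centred on one detected event.

    `event` is a (start, end) pair in seconds, as stored in `detected_events`.
    """
    start, end = event
    centre = (start + end) / 2.0
    lo = max(0, int((centre - window / 2.0) * sr))
    clip = audio[lo:lo + int(window * sr)]
    # Zero-pad clips that run past the end of the recording.
    if len(clip) < int(window * sr):
        clip = np.pad(clip, (0, int(window * sr) - len(clip)))
    return clip

# Example usage (assumes `ds` was loaded as sketched above):
# sample = ds["train"][0]
# events = [e for e, c in zip(sample["detected_events"], sample["event_cluster"]) if c != -1]
# clips = [crop_event(sample["audio"]["array"], 32_000, e) for e in events]
```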
### Test_5s
- Task: Multilabel ("ebird_code_multilabel"); see the encoding sketch after this list.
- Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
- Each recording is segmented into 5-second intervals without overlaps.
- This split contains segments without any labels, which results in a [0] vector.
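Since Test_5s is a multilabel task, the index list in `ebird_code_multilabel` typically has to be expanded into a multi-hot target vector. A minimal sketch; the helper name is made up here, and the number of classes depends on the chosen subset (see the #classes column above).

```python
import numpy as np

def to_multi_hot(label_indices, num_classes):
    """Expand an ebird_code_multilabel index list into a multi-hot target vector."""
    target = np.zeros(num_classes, dtype=np.float32)
    target[label_indices] = 1.0
    return target

# The Test_5s example below has ebird_code_multilabel == [1, 10];
# num_classes=21 corresponds to HSN in the table above.
# How unlabeled segments (the [0] entries mentioned above) are encoded should
# be verified against the class list of the chosen subset.
print(to_multi_hot([1, 10], num_classes=21))
```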
### Test
- Task: Multiclass ("ebird_code")
- Only soundscape data sourced from Zenodo.
- We provide the full recording with the complete label set and specified bounding boxes (see the sketch after this list).
- This dataset excludes recordings that do not contain bird calls ("no_call").
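The bounding boxes make it possible to slice individual annotations out of the full test recordings. A minimal sketch, assuming `start_time`/`end_time` are given in seconds (as in the Test_5s example below) and the audio is decoded at 32 kHz; the helper name is illustrative.

```python
def crop_annotation(sample, sr=32_000):
    """Slice the annotated time span out of a full test recording."""
    lo = int(sample["start_time"] * sr)
    hi = int(sample["end_time"] * sr)
    return sample["audio"]["array"][lo:hi]
```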

## Metadata

| Column                | Format (`datasets` feature)                              | Description |
|-----------------------|----------------------------------------------------------|-------------|
| audio                 | `Audio(sampling_rate=32_000, mono=True, decode=True)`     | xxxxxx |
| filepath              | `Value("string")`                                          | xxxxxx |
| start_time            | `Value("float64")`                                         | xxxxxx |
| end_time              | `Value("float64")`                                         | xxxxxx |
| low_freq              | `Value("int64")`                                           | xxxxxx |
| high_freq             | `Value("int64")`                                           | xxxxxx |
| ebird_code            | `ClassLabel(names=class_list)`                             | xxxxxx |
| ebird_code_multilabel | `Sequence(datasets.ClassLabel(names=class_list))`          | x |
| call_type             | `Sequence(datasets.Value("string"))`                       | x |
| sex                   | `Value("string")`                                          | x |
| lat                   | `Value("float64")`                                         | x |
| long                  | `Value("float64")`                                         | x |
| length                | `Value("int64")`                                           | x |
| microphone            | `Value("string")`                                          | x |
| license               | `Value("string")`                                          | x |
| source                | `Value("string")`                                          | x |
| local_time            | `Value("string")`                                          | x |
| detected_events       | `Sequence(datasets.Sequence(datasets.Value("float64")))`   | x |
| event_cluster         | `Sequence(datasets.Value("int64"))`                        | x |
| peaks                 | `Sequence(datasets.Value("float64"))`                      | x |
| quality               | `Value("string")`                                          | x |
| recordist             | `Value("string")`                                          | x |
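For reference, the table above corresponds to a `datasets.Features` schema roughly like the following sketch; `class_list` is a placeholder for the subset-specific list of eBird codes.

```python
from datasets import Audio, ClassLabel, Features, Sequence, Value

class_list = ["gryhaw3", "plaant1"]  # placeholder; the real list depends on the subset

features = Features({
    "audio": Audio(sampling_rate=32_000, mono=True, decode=True),
    "filepath": Value("string"),
    "start_time": Value("float64"),
    "end_time": Value("float64"),
    "low_freq": Value("int64"),
    "high_freq": Value("int64"),
    "ebird_code": ClassLabel(names=class_list),
    "ebird_code_multilabel": Sequence(ClassLabel(names=class_list)),
    "call_type": Sequence(Value("string")),
    "sex": Value("string"),
    "lat": Value("float64"),
    "long": Value("float64"),
    "length": Value("int64"),
    "microphone": Value("string"),
    "license": Value("string"),
    "source": Value("string"),
    "local_time": Value("string"),
    "detected_events": Sequence(Sequence(Value("float64"))),
    "event_cluster": Sequence(Value("int64")),
    "peaks": Sequence(Value("float64")),
    "quality": Value("string"),
    "recordist": Value("string"),
})
```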

##### Example Metadata Train

```python
# EXAMPLE TRAIN
{'audio': {'path': '.ogg',
  'array': array([ 0.0008485 ,  0.00128899, -0.00317163, ...,  0.00228528,
          0.00270796, -0.00120562]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': None,
 'end_time': None,
 'low_freq': None,
 'high_freq': None,
 'ebird_code': 0,
 'ebird_code_multilabel': [0],
 'ebird_code_secondary': ['plaant1', 'blfnun1', 'butwoo1', 'whtdov', 'undtin1', 'gryhaw3'],
 'call_type': 'song',
 'sex': 'uncertain',
 'lat': -16.0538,
 'long': -49.604,
 'length': 46,
 'microphone': 'focal',
 'license': '//creativecommons.org/licenses/by-nc-sa/4.0/',
 'source': 'xenocanto',
 'local_time': '18:37',
 'detected_events': [[0.736, 1.824],
  [9.936, 10.944],
  [13.872, 15.552],
  [19.552, 20.752],
  [24.816, 25.968],
  [26.528, 32.16],
  [36.112, 37.808],
  [37.792, 38.88],
  [40.048, 40.8],
  [44.432, 45.616]],
 'event_cluster': [0, 0, 0, 0, 0, -1, 0, 0, -1, 0],
 'peaks': [14.76479119037789, 41.16993396760847],
 'quality': 'A',
 'recordist': '...'}
```

##### Example Metadata Test5s

```python
{'audio': {'path': '.ogg',
  'array': array([-0.67190468, -0.9638235 , -0.99569213, ..., -0.01262935,
         -0.01533066, -0.0141047 ]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': 0.0,
 'end_time': 5.0,
 'low_freq': 0,
 'high_freq': 3098,
 'ebird_code': None,
 'ebird_code_multilabel': [1, 10],
 'ebird_code_secondary': None,
 'call_type': None,
 'sex': None,
 'lat': 5.59,
 'long': -75.85,
 'length': None,
 'microphone': 'Soundscape',
 'license': 'Creative Commons Attribution 4.0 International Public License',
 'source': 'https://zenodo.org/record/7525349',
 'local_time': '4:30:29',
 'detected_events': None,
 'event_cluster': None,
 'peaks': None,
 'quality': None,
 'recordist': None}
```

## Citation Information

```bibtex
@article{gadme,
  author    = {Rauch, Lukas and
               Schwinger, Raphael and
               Wirth, Moritz and
               Heinrich, René and
               Lange, Jonas and
               Kahl, Stefan and
               Sick, Bernhard and
               Tomforde, Sven and
               Scholz, Christoph},
  title     = {GADME: A Benchmark Towards General Avian Diversity Monitoring Evaluation in Deep Bioacoustics},
  journal   = {CoRR},
  volume    = {X},
  year      = {2024},
  url       = {X},
  archivePrefix = {arXiv},
}
```

Note that each test dataset in GADME has its own citation. Please refer to the source of each contained dataset for the correct citation. Each file in the training dataset also has its own recordist; the corresponding licenses can be found in the metadata.