Freesound Dataset 50k (FSD50K)

Important

This dataset is a copy of the original hosted on Zenodo.

Citation

If you use the FSD50K dataset, or part of it, please cite our paper:

Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, Xavier Serra. "FSD50K: an Open Dataset of Human-Labeled Sound Events", arXiv 2020.

Data curators

Eduardo Fonseca, Xavier Favory, Jordi Pons, Mercedes Collado, Ceren Can, Rachit Gupta, Javier Arredondo, Gary Avendano and Sara Fernandez

Contact

You are welcome to contact Eduardo Fonseca should you have any questions at eduardo.fonseca@upf.edu.

About FSD50K

Freesound Dataset 50k (or FSD50K for short) is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed in 200 classes drawn from the AudioSet Ontology [1]. FSD50K has been created at the Music Technology Group of Universitat Pompeu Fabra.

What follows is a brief summary of FSD50K's most important characteristics. Please see our paper (especially Section 4) for relevant usage details, as well as discussion, limitations, applications, and more.

Basic characteristics:

  • FSD50K is composed mainly of sound events produced by physical sound sources and production mechanisms.
  • Following AudioSet Ontology’s main families, the FSD50K vocabulary encompasses mainly Human sounds, Sounds of things, Animal, Natural sounds and Music.
  • The dataset has 200 sound classes (144 leaf nodes and 56 intermediate nodes) hierarchically organized with a subset of the AudioSet Ontology. The vocabulary can be inspected in vocabulary.csv (see Files section below).
  • FSD50K contains 51,197 audio clips totalling 108.3 hours of audio.
  • The audio content has been manually labeled by humans following a data labeling process using the Freesound Annotator platform [2].
  • Clips are of variable length from 0.3 to 30s, due to the diversity of the sound classes and the preferences of Freesound users when recording sounds.
  • Ground truth labels are provided at the clip-level (i.e., weak labels).
  • The dataset poses mainly a multi-label sound event classification problem (but also allows a variety of sound event research tasks, see Sec. 4D).
  • All clips are provided as uncompressed PCM 16 bit 44.1 kHz mono audio files.
  • The audio clips are grouped into a development (dev) set and an evaluation (eval) set such that the two sets contain no clips from the same Freesound uploader.
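
Since every clip is a plain uncompressed WAV file, no special tooling is needed to read the audio. A minimal loading sketch using the soundfile library (the path assumes the directory layout described in the Files section below; 64760 is the example clip id used there):

import soundfile as sf

# Every FSD50K clip is a 16-bit, 44.1 kHz, mono WAV file named by its
# Freesound id; durations vary from 0.3 to 30 s.
audio, sr = sf.read("clips/dev/64760.wav")
assert sr == 44100  # the sample rate is fixed across the dataset
print(audio.shape)  # 1-D array (mono), variable length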

Dev set:

  • 40,966 audio clips totalling 80.4 hours of audio
  • Avg duration/clip: 7.1s
  • 114,271 smeared labels (i.e., labels propagated in the upwards direction to the root of the ontology)
  • Labels are correct but could be occasionally incomplete
  • A train/validation split is provided (Sec. 3H). If a different split is used, it should be specified for reproducibility and fair comparability of results (see Sec. 5C of our paper)

Eval set:

  • 10,231 audio clips totalling 27.9 hours of audio
  • Avg duration/clip: 9.8s
  • 38,596 smeared labels
  • Eval set is labeled exhaustively (labels are correct and complete for the considered vocabulary)

NOTE: All classes in FSD50K are represented in AudioSet, except Crash cymbal, Human group actions, Human voice, Respiratory sounds, and Domestic sounds, home sounds.

License

All audio clips in FSD50K are released under Creative Commons (CC) licenses. Each clip has its own license as defined by the clip uploader in Freesound; some licenses require attribution to the original authors and some forbid further commercial reuse. To facilitate attribution of these files to third parties, we include a mapping from the audio clips to their corresponding licenses, specified in the files dev_clips_info_FSD50K.json and eval_clips_info_FSD50K.json. The licenses are CC0, CC-BY, CC-BY-NC and CC Sampling+.

In addition, FSD50K as a whole is the result of a curation process and therefore carries an additional license: FSD50K is released under CC-BY. This license is specified in the LICENSE-DATASET file (see the Files section below).

Files

FSD50K can be downloaded as a series of zip files with the following directory structure:

root
│
└───clips/                              Audio clips
│   │
│   └─── dev/                           Audio clips in the dev set
│   │
│   └─── eval/                          Audio clips in the eval set
│
└───labels/                             Files for FSD50K's ground truth
│   │
│   └─── dev.csv                        Ground truth for the dev set
│   │
│   └─── eval.csv                       Ground truth for the eval set
│   │
│   └─── vocabulary.csv                 List of 200 sound classes in FSD50K
│
└───metadata/                           Files for additional metadata
│   │
│   └─── class_info_FSD50K.json         Metadata about the sound classes
│   │
│   └─── dev_clips_info_FSD50K.json     Metadata about the dev clips
│   │
│   └─── eval_clips_info_FSD50K.json    Metadata about the eval clips
│   │
│   └─── pp_pnp_ratings_FSD50K.json     PP/PNP ratings
│   │
│   └─── collection/                    Files for the *sound collection* format
│
└───README.md                           The dataset description file that you are reading
│
└───LICENSE-DATASET                     License of the FSD50K dataset as an entity

Each row (i.e. audio clip) of dev.csv contains the following information:

  • fname: the file name without the .wav extension, e.g., the fname 64760 corresponds to the file 64760.wav on disk. This number is the Freesound id. We always use Freesound ids as filenames.
  • labels: the class labels (i.e., the ground truth). Note these class labels are smeared, i.e., the labels have been propagated in the upwards direction to the root of the ontology. More details about the label smearing process can be found in Appendix D of our paper.
  • mids: the Freebase identifiers corresponding to the class labels, as defined in the AudioSet Ontology specification
  • split: whether the clip belongs to train or val (see paper for details on the proposed split)

Rows in eval.csv follow the same format, except that there is no split column.
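
For illustration, the ground truth files can be parsed with standard tools; a minimal sketch using pandas (paths assume the directory layout in the Files section above):

import pandas as pd

# Each row of dev.csv describes one clip; labels and mids are stored
# as comma-separated strings.
dev = pd.read_csv("labels/dev.csv")
dev["labels"] = dev["labels"].str.split(",")
dev["mids"] = dev["mids"].str.split(",")

# Recover the proposed train/validation split (eval.csv has no split column).
train = dev[dev["split"] == "train"]
val = dev[dev["split"] == "val"]
print(len(train), len(val), len(dev))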

NOTE: We use a slightly different format than AudioSet for the naming of class labels in order to avoid potential problems with spaces, commas, etc. Example: we use Accelerating_and_revving_and_vroom instead of the original Accelerating, revving, vroom. You can go back to the original AudioSet naming using the information provided in vocabulary.csv (class label and mid for the 200 classes of FSD50K) and the AudioSet Ontology specification.
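
Similarly, vocabulary.csv can be used to map between the two naming schemes. A sketch, assuming the file has no header row and the columns (index, class label, mid):

import pandas as pd

# Assumed layout of vocabulary.csv (no header): index, class label, mid.
vocab = pd.read_csv("labels/vocabulary.csv", header=None,
                    names=["index", "label", "mid"])
label_to_mid = dict(zip(vocab["label"], vocab["mid"]))

# The mid can then be looked up in the AudioSet Ontology specification
# to recover the original display name, e.g., "Accelerating, revving, vroom".
print(label_to_mid["Accelerating_and_revving_and_vroom"])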

Files with additional metadata (metadata/)

To allow a variety of analyses and approaches with FSD50K, we provide the following metadata:

  1. class_info_FSD50K.json: Python dictionary where each entry corresponds to one sound class and contains: FAQs utilized during the annotation of the class, examples (representative audio clips), and verification_examples (audio clips presented to raters during annotation as a quality control mechanism). Audio clips are identified by their Freesound id. NOTE: Some of these examples may not be included in the FSD50K release.

  2. dev_clips_info_FSD50K.json: Python dictionary where each entry corresponds to one dev clip and contains: title, description, tags, clip license, and the uploader name. All these metadata are provided by the uploader.

  3. eval_clips_info_FSD50K.json: same as above, but for eval clips.

  4. pp_pnp_ratings_FSD50K.json: Python dictionary where each entry corresponds to one clip in the dataset and contains the PP/PNP ratings for the labels associated with the clip. More specifically, these ratings were gathered for the labels validated in the validation task (Sec. 3 of our paper). This file includes 59,485 labels for the 51,197 clips in FSD50K. Out of these labels:

  • 56,095 labels have inter-annotator agreement (PP twice, or PNP twice). Each of these combinations can be occasionally accompanied by other (non-positive) ratings.
  • 3,390 labels feature other rating configurations, such as: i) only one PP rating and one PNP rating (and nothing else), which can be considered inter-annotator agreement at the "Present" level; ii) only one PP rating (and nothing else); iii) only one PNP rating (and nothing else).

Ratings' legend: PP=1; PNP=0.5; U=0; NP=-1.

NOTE: The PP/PNP ratings have been provided in the validation task. Subsequently, a subset of these clips corresponding to the eval set was exhaustively labeled in the refinement task, hence receiving additional labels in many cases. For these eval clips, you might want to check their labels in eval.csv in order to have more info about their audio content (see Sec. 3 for details).
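
As an illustration, the agreement counts above can be recomputed from the ratings file. A sketch, under the assumption (not documented above) that each clip id maps to a dictionary of {mid: list of numeric ratings}:

import json

# Legend: PP=1, PNP=0.5, U=0, NP=-1. The per-clip schema used below
# ({clip_id: {mid: [ratings]}}) is an assumption.
with open("metadata/pp_pnp_ratings_FSD50K.json") as f:
    ratings = json.load(f)

# Labels with inter-annotator agreement: rated PP twice or PNP twice.
agreed = sum(
    1
    for per_label in ratings.values()
    for votes in per_label.values()
    if votes.count(1) >= 2 or votes.count(0.5) >= 2
)
print(agreed)  # should be close to the 56,095 figure above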

  5. collection/: This folder contains metadata for what we call the sound collection format. This format consists of the raw annotations gathered, featuring all generated class labels without any restriction.

We provide the collection format to make available some annotations that do not appear in the FSD50K ground truth release. This typically happens for classes for which we gathered human-provided annotations, but that were discarded in the FSD50K release due to data scarcity (more specifically, they were merged with their parents). In other words, the main purpose of the collection format is to make available annotations for tiny classes. The format of these files is analogous to that of the ground truth files in labels/. A couple of examples show the differences between the collection and ground truth formats:

clip: labels_in_collection -- labels_in_ground_truth

51690: Owl -- Bird,Wild_Animal,Animal

190579: Toothbrush,Electric_toothbrush -- Domestic_sounds_and_home_sounds

In the first example, raters provided the label Owl. However, due to data scarcity, Owl labels were merged into their parent Bird. Then, the labels Wild_Animal,Animal were added via label propagation (smearing). The second example shows one of the most extreme cases, where raters provided the labels Electric_toothbrush,Toothbrush, both of which had very little data. Hence, they were merged into Toothbrush's parent, which unfortunately is Domestic_sounds_and_home_sounds (a rather vague class containing a variety of child sound classes).

NOTE: Labels in the collection format are not smeared.
NOTE: While the FSD50K ground truth vocabulary encompasses 200 classes (common to dev and eval), the collection format is composed of raw annotations, so its vocabulary is much larger (over 350 classes) and differs slightly between dev and eval.

For further questions, please contact eduardo.fonseca@upf.edu, or join the freesound-annotator Google Group.

Download

Clone this repository:

git clone https://huggingface.co/Fhrozen/FSD50k
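
If the audio files are stored with Git LFS (typical for Hugging Face repositories hosting large binary files; an assumption about this mirror's setup), install Git LFS before cloning so the clips are fetched rather than left as pointer files:

git lfs install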

Baseline System

Several baseline systems for FSD50K are available at https://github.com/edufonseca/FSD50K_baseline. The experiments are described in Sec. 5 of our paper.

References and links

[1] Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. "Audio Set: An Ontology and Human-Labeled Dataset for Audio Events." In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2017.

[2] Eduardo Fonseca, Jordi Pons, Xavier Favory, Frederic Font, Dmitry Bogdanov, Andres Ferraro, Sergio Oramas, Alastair Porter, and Xavier Serra. "Freesound Datasets: A Platform for the Creation of Open Audio Datasets." In Proceedings of the International Conference on Music Information Retrieval, 2017.

Companion site for FSD50K: https://annotator.freesound.org/fsd/release/FSD50K/
Freesound Annotator: https://annotator.freesound.org/
Freesound: https://freesound.org
Eduardo Fonseca's personal website: http://www.eduardofonseca.net/
More datasets collected by us: http://www.eduardofonseca.net/datasets/

Acknowledgments

The authors would like to thank everyone who contributed to FSD50K with annotations, and especially Mercedes Collado, Ceren Can, Rachit Gupta, Javier Arredondo, Gary Avendano and Sara Fernandez for their commitment and perseverance. The authors would also like to thank Daniel P.W. Ellis and Manoj Plakal from Google Research for valuable discussions. This work is partially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688382 AudioCommons, and two Google Faculty Research Awards 2017 and 2018, and the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
