Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras
Figure 1: Faces and facial landmarks detected using the model produced in this project.
Our dataset contains 689 minutes of recorded event streams and 1.6 million annotated faces with bounding box and five-point facial landmark labels.
Dataset description
Figure 2: File structure of the FES dataset, with green representing an event stream and blue representing annotations: a) The preprocessed data are divided into three folders containing, respectively, bounding box annotations only, both bounding box and facial landmark annotations, and event streams in the h5 format. The raw dataset contains lab and wild folders with raw videos and annotations. b) Each controlled-experiment (Lab) file has an individual subject ID and an experiment ID. Each file in the uncontrolled (Wild) dataset contains a scene ID that identifies the recording location and the number (ID) of the experiment.
The final dataset contains both the originally collected raw files and the preprocessed data. To produce the preprocessed data from the raw files, the reader can refer to the preprocessing folder of this repo. The raw files contain video in the “raw” format that can be rendered, along with annotations in the “xml” format. The converted files contain a dataset ready for machine learning training in the “npy” format, bounding box and facial landmark annotations, and “h5” files, a Python-readable binary format for working with the event stream data as arrays.
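For illustration, the preprocessed files can be read with standard Python tools. The sketch below is a minimal example, not the official loading code: the file names and the key layout inside the h5 files are assumptions and should be checked against the downloaded data.

```python
import h5py
import numpy as np

# Hypothetical file names for illustration; actual names follow the
# subject/experiment ID scheme described in Figure 2.
events_path = "lab_subject01_exp01.h5"
labels_path = "lab_subject01_exp01.npy"

# Event streams are stored in h5 files; the dataset keys inside each
# file may differ, so inspect them before indexing.
with h5py.File(events_path, "r") as f:
    print(list(f.keys()))                 # discover the stored datasets
    events = f[list(f.keys())[0]][:]      # load the first dataset as a NumPy array

# Annotations (bounding boxes and facial landmarks) are stored as npy arrays.
labels = np.load(labels_path, allow_pickle=True)

print(events.shape, labels.shape)
```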
The integration of the event streams with the annotated labels was based on the time dimension. Since events were recorded with microsecond precision, the label timeline, which originally had millisecond precision and was derived from the frame number at a frame rate of 30 Hz, was also converted to microseconds.
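For example, a label attached to frame n of the 30 Hz annotation timeline maps to a timestamp of n / 30 × 10^6 microseconds. The sketch below illustrates that conversion and how events could be selected for a given label frame; the variable names and event timestamps are illustrative only.

```python
import numpy as np

FRAME_RATE_HZ = 30  # annotation frame rate stated above


def frame_to_microseconds(frame_idx: int) -> int:
    """Convert an annotation frame index to a microsecond timestamp."""
    return int(frame_idx / FRAME_RATE_HZ * 1_000_000)


# Illustrative example: select events that fall within the interval of
# label frame 90, i.e. [3.000000 s, 3.033333 s) after the recording start.
event_timestamps_us = np.array([2_999_950, 3_000_100, 3_016_700, 3_040_000])
start_us = frame_to_microseconds(90)
end_us = frame_to_microseconds(91)
mask = (event_timestamps_us >= start_us) & (event_timestamps_us < end_us)
print(event_timestamps_us[mask])  # events aligned with the frame-90 labels
```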
If you use the dataset/source code/pre-trained models in your research, please cite our work:
@Article{s24051409,
AUTHOR = {Bissarinova, Ulzhan and Rakhimzhanova, Tomiris and Kenzhebalin, Daulet and Varol, Huseyin Atakan},
TITLE = {Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras},
JOURNAL = {Sensors},
VOLUME = {24},
YEAR = {2024},
NUMBER = {5},
ARTICLE-NUMBER = {1409},
URL = {https://www.mdpi.com/1424-8220/24/5/1409},
ISSN = {1424-8220},
DOI = {10.3390/s24051409}
}