Dataset Card for SAMSEMO

SAMSEMO: New dataset for multilingual and multimodal emotion recognition

Dataset Structure

SAMSEMO/
├── data - zipped directories for each language, containing jpg, mp4, and wav files
│   └── pkl_files - each language directory from data, processed into pkl format
├── metadata - directory with metadata
├── samsemo.tsv - metadata file (described below)
└── splits - txt files with splits (list of ids) for each language
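
A minimal loading sketch in Python, assuming the zipped language directories have been extracted in place; the split file name and the pkl file name and contents are assumptions (they are not documented here), so inspect what you actually load:

import pickle
from pathlib import Path

import pandas as pd

ROOT = Path("SAMSEMO")  # path to the extracted dataset (assumption)

# samsemo.tsv is the metadata file described in the Annotations section.
meta = pd.read_csv(ROOT / "samsemo.tsv", sep="\t")

# splits/ contains txt files listing utterance ids; the per-language
# file name used here is hypothetical.
train_ids = (ROOT / "splits" / "EN_train.txt").read_text().split()
train_meta = meta[meta["utterance_id"].isin(train_ids)]

# Load one processed language file from pkl_files; its internal layout
# is not documented, so check the type and keys after loading.
with open(ROOT / "data" / "pkl_files" / "EN.pkl", "rb") as fh:  # hypothetical name
    scenes = pickle.load(fh)

print(type(scenes), len(train_meta))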

Annotations

The SAMSEMO metadata file, samsemo.tsv, contains the following columns (a short parsing sketch follows the list):

  • utterance_id – alphanumeric ID of the video scene. It consists of the ID of the source video followed by an underscore and the number of the scene (the utterance taken from a given movie).
  • movie_title – the title of the source video, according to the website it was taken from
  • movie_link – the link leading to the source video.
  • source_scene_start, source_scene_stop – the beginning and end of the scene as determined in the preliminary annotation. The annotators provided times in hh:mm:ss format, without milliseconds. We cut out the scenes, setting the start at the beginning of the first second (ss.00) and the end at the end of the last second (ss.99). The scenes were later adjusted to remove redundant fragments.
  • language – the language of the scene: EN = English, DE = German, ES = Spanish, PL = Polish, KO = Korean.
  • sex – sex of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: male, female, other.
  • age – approximate age of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: adolescent, adult, elderly.
  • race – race of the speaker identified by the annotators (not confirmed by the speaker – see DISCLAIMER). Possible labels: asian, black, hispanic, white, other.
  • covered_face – label indicating whether the speaker’s face is partially covered, e.g. by their hands, a scarf, or a face mask. No = the face is not covered, Yes = the face is covered.
  • multiple_faces – label indicating whether one or more people are shown in the scene. No = one person, Yes = multiple people.
  • emotion_1_annotator_1, emotion_2_annotator_1 – emotion labels assigned to the scene by the first annotator.
  • emotion_1_annotator_2, emotion_2_annotator_2 – emotion labels assigned to the scene by the second annotator.
  • emotion_1_annotator_3, emotion_2_annotator_3 – emotion labels assigned to the scene by the third annotator.
  • aggregated_emotions – the final emotions assigned to the video scene. If two or three annotators assigned a given label to the scene, that label is included in the final aggregation and appears in this column.
  • annotator_1, annotator_2, annotator_3 – anonymized IDs of the annotators.
  • transcript – the text of the utterance from the scene. It is the output of an ASR system, subsequently verified manually.
  • translation_de, translation_en, translation_es, translation_ko, translation_pl – the translation of the text into the other languages used in this dataset. Note that these translations were produced by a machine translation engine and have not been manually verified.
  • duration – the duration of the scene in the following format: hh:mm:ss.ms
  • movie_type – the type of the source video from which the scene was taken. Possible categories: advertisement, debate, documentary, interview, lecture, monologue, movie, news, speech, stand-up, theatrical play, vlog, web or TV show, workout.
  • license – the license under which we share the video scene. Note that the metadata are shared under the CC BY-NC-SA 4.0 license (see DISCLAIMER).
  • author – the author of the video, identified by us to the best of our knowledge on the basis of the data provided on the websites from which the videos were taken.
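
As a rough illustration of consuming these columns in Python (a sketch, not the official loader): the separator inside aggregated_emotions is assumed to be a comma, which should be verified against a few rows of the actual file.

import pandas as pd

meta = pd.read_csv("SAMSEMO/samsemo.tsv", sep="\t")

# Keep one language, using the two-letter codes listed above.
en = meta[meta["language"] == "EN"].copy()

# aggregated_emotions holds labels assigned by at least two annotators;
# a comma separator is an assumption -- check the actual file.
en["labels"] = en["aggregated_emotions"].str.split(",")

# duration uses hh:mm:ss.ms, which pandas parses as a timedelta.
en["seconds"] = pd.to_timedelta(en["duration"]).dt.total_seconds()

# Example: scenes under 10 seconds with exactly one agreed label.
short_single = en[(en["seconds"] < 10) & (en["labels"].str.len() == 1)]
print(short_single[["utterance_id", "labels", "transcript"]].head())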

DISCLAIMER

  1. Please note that the metadata provided for each scene include labels referring to the gender of the speakers. The annotators were asked to provide such labels so that SAMSEMO could be verified in terms of gender representation (males 57.32%, females 42.51%, other 0.17%). The same applies to race information: annotators were asked to label the presumed race of the speakers using a restricted set of labels so that SAMSEMO could be assessed in terms of racial representation (we did not have access to speakers' self-reports in this regard). We acknowledge that both concepts are shaped by social and cultural circumstances and that the labels provided in SAMSEMO are based on the subjective perceptions and individual experience of the annotators. The metadata should therefore be approached very carefully in future studies.
  2. The movie license information provided in SAMSEMO has been collected with due diligence. All video material is shared under its original licenses. However, if any video materials included in the SAMSEMO dataset infringe your copyright by any means, please send us a takedown notice containing the movie title(s) and movie link(s). Please also include a statement by you, under penalty of perjury, that the information in your notice is accurate and that you are the copyright owner or authorized to act on the copyright owner's behalf.
  3. All SAMSEMO metadata (emotion annotation, transcript and speaker information) are shared under the CC BY-NC-SA 4.0 license.

Citation

@inproceedings{samsemo24_interspeech,
  title     = {SAMSEMO: New dataset for multilingual and multimodal emotion recognition},
  author    = {Pawel Bujnowski and Bartlomiej Kuzma and Bartlomiej Paziewski and Jacek Rutkowski and Joanna Marhula and Zuzanna Bordzicka and Piotr Andruszkiewicz},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {2925--2929},
  doi       = {10.21437/Interspeech.2024-212},
}