chocobearz committed
Commit
71b37f6
1 Parent(s): ba9f352

Add metadata and split information

Files changed (1):
  README.md  +64 -3
README.md CHANGED
@@ -25,8 +25,8 @@ We release the BERSt Dataset for various speech recognition tasks including Auto
 
 ## Overview
 
- * 9207 single phrase recordings (~12h)
- * 96 professional actors
+ * 4526 single phrase recordings (~12h)
+ * 98 professional actors
  * 19 phone positions
  * 7 emotion classes
  * 3 vocal intensity levels
@@ -40,4 +40,65 @@ Participants were around the globe and represent varying regional accents in Eng
 The data includes 13 nonsense phrases for use cases robust to linguistic context and high surprisal.
 Participants were prompted to speak, raise their voice and shout each phrase while moving their phone to various distances and locations in their home, as well as with various obstructions to the microphone, e.g. in a backpack.
 
- Baseline results of various state-of-the-art methods for ASR and SER show that this dataset remains a challenging task, and we encourage researchers to use this data to fine-tune and benchmark their models in these difficult condition representing possible real world situations
+ Baseline results of various state-of-the-art methods for ASR and SER show that this dataset remains a challenging task, and we encourage researchers to use this data to fine-tune and benchmark their models in these difficult conditions representing possible real-world situations.
+
+ Affect annotations are those provided to the actors; they have not been validated through perception.
+ The speech annotations, however, have been checked and adjusted to reflect mistakes in the speech.
+
+ ## Data splits and organisation
+
+ For each phone position and phrase, the actors provided a single recording covering the three vocal intensity levels; these raw audio files are available.
+
+ Metadata in CSV format corresponds to the files split per utterance, found inside `clean_clips` for each data split.
+
+ We provide test, train and validation splits.
+
+ There is no speaker cross-over between splits; the test and validation sets each contain 10 speakers not seen in the training set.
+
+ ## Baseline Results
+ TBD
+
+ ## Metadata Details
+
+ * Actor count: 98
+ * Gender counts
+   * Woman: 61
+   * Man: 34
+   * Non-Binary: 1
+   * Prefer not to disclose: 2
+ * Current daily language counts
+   * English: 95
+   * Norwegian: 1
+   * Russian: 1
+   * French: 1
+ * First language counts
+   * English: 75
+   * Non-English: 23
+     * Spanish: 6
+     * French: 3
+     * Portuguese: 3
+     * Chinese: 2
+     * Norwegian: 1
+     * Mandarin: 1
+     * Tagalog: 1
+     * Italian: 1
+     * Hungarian: 1
+     * Russian: 1
+     * Hindi: 1
+     * Swahili: 1
+     * Croatian: 1
+
+ Pre-split data counts
+ * Emotion counts
+   * fear: 236
+   * neutral: 234
+   * disgust: 232
+   * joy: 224
+   * anger: 223
+   * surprise: 210
+   * sadness: 201
+ * Distance counts
+   * Near body: 627
+   * 1-2m away: 324
+   * Other side of room: 316
+   * Outside of room: 293
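
Below is a minimal sketch of how the per-split metadata and `clean_clips` audio described in this commit might be loaded. The split CSV names (`train.csv`, `validation.csv`, `test.csv`), the directory layout, and the column names (`file_name`, `speaker_id`) are illustrative assumptions, not the dataset's documented schema.

```python
# Minimal loading sketch. File names, directory layout and column names
# (train.csv, clean_clips/, file_name, speaker_id) are assumptions for
# illustration, not the dataset's documented schema.
from pathlib import Path

import pandas as pd
import soundfile as sf

DATA_ROOT = Path("BERSt")  # hypothetical local copy of the dataset


def load_split(split: str) -> pd.DataFrame:
    """Read one split's metadata CSV and attach paths to its clean_clips audio."""
    meta = pd.read_csv(DATA_ROOT / f"{split}.csv")  # assumed CSV name
    clips_dir = DATA_ROOT / split / "clean_clips"   # per-utterance clips
    meta["clip_path"] = meta["file_name"].map(lambda name: str(clips_dir / name))
    return meta


if __name__ == "__main__":
    train = load_split("train")
    validation = load_split("validation")

    # Sanity check: the card states there is no speaker cross-over between splits.
    assert set(train["speaker_id"]).isdisjoint(set(validation["speaker_id"]))

    # Read one utterance to confirm the audio paths resolve.
    audio, sr = sf.read(train.loc[0, "clip_path"])
    print(f"{len(train)} training utterances; first clip is {len(audio) / sr:.2f}s at {sr} Hz")
```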
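
The counts under "Metadata Details" could in principle be recomputed from the same metadata with simple group-bys; the column names used here (`emotion`, `distance`, `gender`, `speaker_id`) are again assumptions, and `load_split` is the hypothetical helper from the sketch above.

```python
# Sketch: recompute the summary counts from the (assumed) metadata columns,
# reusing the hypothetical load_split helper defined above.
import pandas as pd

meta = pd.concat(
    [load_split(name) for name in ("train", "validation", "test")],
    ignore_index=True,
)

# Recording-level counts (e.g. emotions and phone distances).
print(meta["emotion"].value_counts())
print(meta["distance"].value_counts())

# Speaker-level counts (e.g. gender): count each speaker once.
speakers = meta.drop_duplicates(subset="speaker_id")
print(speakers["gender"].value_counts())
```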