Upload ./README.md with huggingface_hub
README.md CHANGED

@@ -33,9 +33,9 @@ This dataset is 1 of 3 used in the paper, the others being:
 
 The dataset features 3 parts obtained from the 2 original datasets:
 
-- CMU non-US English speakers
-- CMU native US English speakers
-- L2 non-native English speakers
+- CMU (native) non-US English speakers
+- CMU (native) US English speakers
+- L2 (non-native) English speakers
 
 
 The original ARCTIC samples are used as `human` samples, while `synthetic` samples are generated using [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech).
@@ -74,7 +74,7 @@ This dataset is best used as a test set for accents. Each sample contains an `Au
 
 ### Direct Use
 
-The following snippet of code demonstrates loading the CMU non-
+The following snippet of code demonstrates loading the CMU non-US English speaker part of the dataset:
 
 ```python
 from datasets import load_dataset
@@ -88,8 +88,8 @@ arctic_hs = load_dataset(
 ```
 
 To load a different part, change `cmu_non-us` into one of the following:
-- `cmu_us` for CMU native US English speakers
-- `l2` for L2 non-native speakers
+- `cmu_us` for CMU (native) US English speakers
+- `l2` for L2 (non-native) English speakers
 
 This dataset only has a `test` split.
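The loading snippet in the diff above is truncated (only the `load_dataset` import and the opening of the `arctic_hs = load_dataset(` call are visible). A minimal sketch of how one part might be loaded, assuming a placeholder repo id — `user/arctic-hs` is hypothetical, since the real dataset id is not shown in this diff:

```python
# The three part names given in the README, mapped to their descriptions.
PARTS = {
    "cmu_non-us": "CMU (native) non-US English speakers",
    "cmu_us": "CMU (native) US English speakers",
    "l2": "L2 (non-native) English speakers",
}


def load_part(repo_id: str, part: str):
    """Load one part of the dataset; per the README, only a `test` split exists."""
    if part not in PARTS:
        raise ValueError(f"unknown part {part!r}; choose one of {sorted(PARTS)}")
    # Deferred import so the helper can be defined without `datasets` installed.
    from datasets import load_dataset
    return load_dataset(repo_id, part, split="test")
```

Swapping `part` between `cmu_non-us`, `cmu_us`, and `l2` selects the accent group, matching the README's instruction to change `cmu_non-us` into one of the other part names.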