|
|
|
# how this build script and dataset_infos.json were generated |
|
|
|
# this is a translation dataset, so let's adapt flores - it has an almost identical input format, only the files are named differently:
|
# grab the flores build script (https://github.com/huggingface/datasets/blob/master/datasets/flores/flores.py)
wget -O wmt14-en-de-pre-processed.py https://raw.githubusercontent.com/huggingface/datasets/master/datasets/flores/flores.py
|
# rename the builder classes (the main class name has to be the camel-cased dataset name)
perl -pi -e 's|Flores|Wmt14EnDePreProcessed|g' wmt14-en-de-pre-processed.py
|
|
|
(good reference scripts to model other tasks on can be found here: https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference)
|
|
|
# then edit the script to change the language pairs, the file name template and the data url
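
# for orientation, here is a minimal sketch of the kind of builder the edited script ends up being -
# the data url, the "train.en"/"train.de" file name template and the config below are illustrative
# assumptions, not the real values used by wmt14-en-de-pre-processed.py:

import os

import datasets

_DATA_URL = "https://example.com/wmt14-en-de-pre-processed.tar.gz"  # assumed placeholder url


class Wmt14EnDePreProcessedConfig(datasets.BuilderConfig):
    """BuilderConfig for one language pair, e.g. ("en", "de")."""

    def __init__(self, language_pair=(None, None), **kwargs):
        super().__init__(
            name="%s-%s" % language_pair, version=datasets.Version("1.0.0"), **kwargs
        )
        self.language_pair = language_pair


class Wmt14EnDePreProcessed(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [Wmt14EnDePreProcessedConfig(language_pair=("en", "de"))]

    def _info(self):
        source, target = self.config.language_pair
        return datasets.DatasetInfo(
            description="WMT14 English-German, pre-processed",
            features=datasets.Features(
                {"translation": datasets.features.Translation(languages=(source, target))}
            ),
            supervised_keys=(source, target),
        )

    def _split_generators(self, dl_manager):
        # the data url and the per-split file name template are the main edits
        path = dl_manager.download_and_extract(_DATA_URL)
        source, target = self.config.language_pair
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "source_file": os.path.join(path, "train.%s" % source),
                    "target_file": os.path.join(path, "train.%s" % target),
                },
            ),
            # validation/test splits follow the same pattern
        ]

    def _generate_examples(self, source_file, target_file):
        source, target = self.config.language_pair
        with open(source_file, encoding="utf-8") as src_f, open(
            target_file, encoding="utf-8"
        ) as tgt_f:
            for idx, (src, tgt) in enumerate(zip(src_f, tgt_f)):
                yield idx, {"translation": {source: src.strip(), target: tgt.strip()}}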
|
git add wmt14-en-de-pre-processed.py |
|
git commit -m "build script" wmt14-en-de-pre-processed.py |
|
git push |
|
|
|
# then test the script - this also generates dataset_infos.json
|
datasets-cli test stas/wmt14-en-de-pre-processed --save_infos --all_configs |
|
|
|
# add and push the generated dataset_infos.json
|
git add dataset_infos.json |
|
git commit -m "add dataset_infos.json" dataset_infos.json |
|
git push |
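
# once both the script and dataset_infos.json are pushed, the dataset should be loadable
# straight from the hub as a quick end-to-end check - the "en-de" config name below is an
# assumption, use whatever config name the edited script actually defines:

python -c 'from datasets import load_dataset; print(load_dataset("stas/wmt14-en-de-pre-processed", "en-de"))'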
|
|