html_url (string, 47-49 chars) | title (string, 4-111 chars) | comments (string, 71-20.4k chars) | body (string, 0-12.9k chars) | comment_length_in_words (int64, 16-1.61k) | text (string, 100-20.5k chars) |
---|---|---|---|---|---|
https://github.com/huggingface/datasets/pull/2954 | Run tests in parallel | There is a speed up in Windows machines:
- From `13m 52s` to `11m 10s`
In Linux machines, some workers crash with error message:
```
OSError: [Errno 12] Cannot allocate memory
``` | Run CI tests in parallel to speed up the test suite. | 32 | text: Run tests in parallel
Run CI tests in parallel to speed up the test suite.
There is a speed up in Windows machines:
- From `13m 52s` to `11m 10s`
In Linux machines, some workers crash with error message:
```
OSError: [Errno 12] Cannot allocate memory
``` |
https://github.com/huggingface/datasets/pull/2954 | Run tests in parallel | There is also a speed up in Linux machines:
- From `7m 30s` to `5m 32s` | Run CI tests in parallel to speed up the test suite. | 16 | text: Run tests in parallel
Run CI tests in parallel to speed up the test suite.
There is also a speed up in Linux machines:
- From `7m 30s` to `5m 32s` |
https://github.com/huggingface/datasets/pull/2951 | Dummy labels no longer on by default in `to_tf_dataset` | @lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR. | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | 21 | text: Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR. |
https://github.com/huggingface/datasets/pull/2951 | Dummy labels no longer on by default in `to_tf_dataset` | Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | 17 | text: Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps. | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 44 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps. |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | Hi @Hazoom,
You were right: the non-passing test had nothing to do with this PR.
Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
- your commits repeated two times
- and commits which are not yours from the master branch
If you would like to clean your pull request, please make:
```
git reset --hard 587b93a
git fetch upstream master
git merge upstream/master
git push --force origin sede
``` | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 101 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
Hi @Hazoom,
You were right: the non-passing test had nothing to do with this PR.
Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
- your commits repeated two times
- and commits which are not yours from the master branch
If you would like to clean your pull request, please make:
```
git reset --hard 587b93a
git fetch upstream master
git merge upstream/master
git push --force origin sede
``` |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | > Hi @Hazoom,
>
> You were right: the non-passing test had nothing to do with this PR.
>
> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
>
> * your commits repeated two times
> * and commits which are not yours from the master branch
>
> If you would like to clean your pull request, please make:
>
> ```
> git reset --hard 587b93a
> git fetch upstream master
> git merge upstream/master
> git push --force origin sede
> ```
Thanks @albertvillanova | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 120 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
> Hi @Hazoom,
>
> You were right: the non-passing test had nothing to do with this PR.
>
> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
>
> * your commits repeated two times
> * and commits which are not yours from the master branch
>
> If you would like to clean your pull request, please make:
>
> ```
> git reset --hard 587b93a
> git fetch upstream master
> git merge upstream/master
> git push --force origin sede
> ```
Thanks @albertvillanova |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | > Nice! Just one final request before approving your pull request:
>
> As you have updated the "QuerySetId" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:
>
> ```
> rm datasets/sede/dataset_infos.json
> datasets-cli test datasets/sede --save_infos --all_configs
> ```
@albertvillanova Good catch, just fixed it. | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 57 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
> Nice! Just one final request before approving your pull request:
>
> As you have updated the "QuerySetId" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:
>
> ```
> rm datasets/sede/dataset_infos.json
> datasets-cli test datasets/sede --save_infos --all_configs
> ```
@albertvillanova Good catch, just fixed it. |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, e.g. `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py). That way, all the datasets are one-level deep directories | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 53 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, e.g. `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py). That way, all the datasets are one-level deep directories |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
cc @Pierrci | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 33 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
cc @Pierrci |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | > IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
out of curiosity: where is it enforced? | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 39 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
out of curiosity: where is it enforced? |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | > where is it enforced?
Nowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 28 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
> where is it enforced?
Nowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | Thanks for the trick, I'm doing the change :)
We can use
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 21 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
Thanks for the trick, I'm doing the change :)
We can use
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files |
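A hypothetical sketch of the serialization trick agreed on above (not the actual `datasets` implementation): the namespace separator `/` is replaced with `___`, which repo ids and usernames are assumed not to contain, so the mapping is collision-free and every cached dataset stays a one-level-deep directory.
```python
# Hypothetical illustration of the cache-path scheme discussed above,
# not the actual `datasets` implementation.
from pathlib import Path

def cache_dir_for(dataset_id: str, cache_root: str = "~/.cache/huggingface/datasets") -> Path:
    # "username/dataset_name" -> "username___dataset_name"
    # "___" is assumed to be forbidden in repo ids and usernames, so this
    # serialization cannot collide with a plain (un-namespaced) dataset name.
    serialized = dataset_id.replace("/", "___")
    return Path(cache_root).expanduser() / serialized

print(cache_dir_for("username/dataset_name"))  # .../datasets/username___dataset_name
print(cache_dir_for("dataset_name"))           # .../datasets/dataset_name
```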
https://github.com/huggingface/datasets/pull/2935 | Add Jigsaw unintended Bias | Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix | Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there. | 22 | text: Add Jigsaw unintended Bias
Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix |
https://github.com/huggingface/datasets/pull/2931 | Fix bug in to_tf_dataset | I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway! | Replace `set_format()` to `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | 27 | text: Fix bug in to_tf_dataset
Replace `set_format()` to `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`
I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway! |
https://github.com/huggingface/datasets/pull/2925 | Add tutorial for no-code dataset upload | Cool, love it ! :)
Feel free to add a paragraph saying how to load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stevhliu/demo")
# or to separate each csv file into several splits
data_files = {"train": "train.csv", "test": "test.csv"}
dataset = load_dataset("stevhliu/demo", data_files=data_files)
print(dataset["train"][0])
``` | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | 47 | text: Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
Cool, love it ! :)
Feel free to add a paragraph saying how to load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stevhliu/demo")
# or to separate each csv file into several splits
data_files = {"train": "train.csv", "test": "test.csv"}
dataset = load_dataset("stevhliu/demo", data_files=data_files)
print(dataset["train"][0])
``` |
https://github.com/huggingface/datasets/pull/2925 | Add tutorial for no-code dataset upload | Perfect, feel free to mark this PR ready for review :)
cc @albertvillanova do you have any comments? You can check the tutorial here:
https://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html
Maybe we can just add a list of supported file types:
- csv
- json
- json lines
- text
- parquet | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | 48 | text: Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
Perfect, feel free to mark this PR ready for review :)
cc @albertvillanova do you have any comments? You can check the tutorial here:
https://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html
Maybe we can just add a list of supported file types:
- csv
- json
- json lines
- text
- parquet |
https://github.com/huggingface/datasets/pull/2916 | Add OpenAI's pass@k code evaluation metric | > The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?
It should work normally, but feel free to test it.
There is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)
You can test this by spawning several processes, each of which loads the metric. Then in each process you add some references/predictions to the metric. Finally, you call compute() in each process, and on process 0 it should return the result over all the references/predictions.
Let me know if you have questions or if I can help | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unittests to test the solutions. While reference solutions are also available, they are not used. Should the naming be adapted? | 105 | text: Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unittests to test the solutions. While reference solutions are also available, they are not used. Should the naming be adapted?
> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?
It should work normally, but feel free to test it.
There is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)
You can test this by spawning several processes, each of which loads the metric. Then in each process you add some references/predictions to the metric. Finally, you call compute() in each process, and on process 0 it should return the result over all the references/predictions.
Let me know if you have questions or if I can help |
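A hedged sketch of the distributed pattern described in the comment above, assuming the `num_process`/`process_id` arguments of `load_metric` from the linked documentation: every worker adds its own shard of predictions/references, and only process 0 receives the aggregated result from `compute()`.
```python
# Hedged sketch of the distributed-metric pattern described above; the
# num_process/process_id arguments are assumed from the linked docs.
from multiprocessing import Process
from datasets import load_metric

def worker(rank: int, world_size: int) -> None:
    metric = load_metric("accuracy", num_process=world_size, process_id=rank)
    # Each process contributes only its own shard of predictions/references.
    metric.add_batch(predictions=[rank, 0], references=[0, 0])
    result = metric.compute()  # only process 0 gets the aggregated result
    if rank == 0:
        print(result)

if __name__ == "__main__":
    world_size = 2
    procs = [Process(target=worker, args=(rank, world_size)) for rank in range(world_size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```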
https://github.com/huggingface/datasets/pull/2916 | Add OpenAI's pass@k code evaluation metric | Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages. | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unittests to test the solutions. While reference solutions are also available, they are not used. Should the naming be adapted? | 25 | text: Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unittests to test the solutions. While reference solutions are also available, they are not used. Should the naming be adapted?
Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages. |
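A hedged usage sketch of the `code_eval` metric following the `predictions`/`references` convention described in the PR body: each reference is a unit-test string and each prediction is a list of candidate solutions. The exact opt-in mechanism for executing untrusted code (discussed above) is an assumption here, so check the metric's documentation before running it.
```python
# Hedged usage sketch of the code_eval metric described above. The opt-in
# environment variable is an assumption based on the safety discussion in the PR.
import os
os.environ["HF_ALLOW_CODE_EVAL"] = "1"  # assumed opt-in before executing generated code

from datasets import load_metric

code_eval = load_metric("code_eval")
test_cases = ["assert add(2, 3) == 5"]  # references: unit tests
candidates = [[                         # predictions: candidate solutions per problem
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a * b",
]]
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```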
https://github.com/huggingface/datasets/pull/2906 | feat: πΈ add a function to get a dataset config's split names | > Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
Yes totally :) This tutorial should indeed mention this, given how fundamental it is | Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
Questions:
- <strike>I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?</strike> no -> reverted
- Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos) | 28 | text: feat: πΈ add a function to get a dataset config's split names
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
Questions:
- <strike>I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?</strike> no -> reverted
- Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
Yes totally :) This tutorial should indeed mention this, given how fundamental it is |
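A short, hedged example of the inspection helpers this PR is about, assuming they are exposed as `datasets.get_dataset_infos` and `datasets.get_dataset_split_names` and accept `use_auth_token` for private datasets, as the PR description suggests.
```python
# Hedged example of the inspection helpers discussed above; the exact names and
# the use_auth_token argument are assumed from the PR description.
from datasets import get_dataset_config_names, get_dataset_infos, get_dataset_split_names

configs = get_dataset_config_names("glue")        # e.g. ['cola', 'sst2', ...]
splits = get_dataset_split_names("glue", "cola")  # e.g. ['train', 'validation', 'test']
infos = get_dataset_infos("glue")                 # dict: config name -> DatasetInfo

print(configs[:3], splits)
# For a private dataset on the Hub, a token would be passed along, e.g.:
# get_dataset_split_names("username/private_dataset", use_auth_token=True)
```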
https://github.com/huggingface/datasets/pull/2897 | Add OpenAI's HumanEval dataset | I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :) | This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify the solutions. This dataset is useful to evaluate code generation models. | 24 | text: Add OpenAI's HumanEval dataset
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify the solutions. This dataset is useful to evaluate code generation models.
I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :) |
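If the dataset ends up on the Hub under an id such as `openai_humaneval` (an assumption, not stated in the thread), loading it would look roughly like this:
```python
# Hedged sketch: the dataset id "openai_humaneval" and the field names are
# assumed from the original HumanEval release, not stated in the thread above.
from datasets import load_dataset

humaneval = load_dataset("openai_humaneval", split="test")
print(humaneval[0]["task_id"])
print(humaneval[0]["prompt"][:80])
```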
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 18 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | Thank you so much for adding these subsets @anton-l!
> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Are we allowed to make these datasets public or would that violate the terms of their use? | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 47 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Thank you so much for adding these subsets @anton-l!
> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Are we allowed to make these datasets public or would that violate the terms of their use? |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :( | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 69 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :( |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | > @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(
I think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)? | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 140 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(
I think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)? |
https://github.com/huggingface/datasets/pull/2876 | Extend support for streaming datasets that use pathlib.Path.glob | Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;)
I have added `rglob` as well and fixed some bugs. | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.
Related to #2874, #2866.
CC: @severo | 29 | text: Extend support for streaming datasets that use pathlib.Path.glob
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.
Related to #2874, #2866.
CC: @severo
Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;)
I have added `rglob` as well and fixed some bugs. |
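A toy illustration of the general monkey-patching approach this PR relies on (not the actual `datasets` extension): `pathlib.Path.glob` is wrapped so that calls can be intercepted and, in the real library, rerouted to a streaming-aware backend.
```python
# Toy illustration of the monkey-patching idea behind this PR; the real
# `datasets` patch reroutes remote/archive paths through its streaming
# backend, which is omitted here.
import pathlib

_original_glob = pathlib.Path.glob

def _patched_glob(self, pattern):
    # A real patch would detect URLs / extracted archives here and delegate
    # to a streaming-capable filesystem instead of the local one.
    print(f"Path.glob intercepted: {self!s} / {pattern!r}")
    return _original_glob(self, pattern)

pathlib.Path.glob = _patched_glob
try:
    print(list(pathlib.Path(".").glob("*.py"))[:3])
finally:
    pathlib.Path.glob = _original_glob  # always restore the original method
```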
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.
```python
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
``` | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 19 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.
```python
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
``` |
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | @severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions...
| This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 27 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions...
|
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in! | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 25 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in! |
https://github.com/huggingface/datasets/pull/2873 | adding swedish_medical_ner | Hi, what's the current status of this request? It says Changes requested, but I can't see what changes? | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | 18 | text: adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored
Hi, what's the current status of this request? It says Changes requested, but I can't see what changes? |
https://github.com/huggingface/datasets/pull/2873 | adding swedish_medical_ner | Hi, it looks like this PR includes changes to other files than `swedish_medical_ner`.
Feel free to remove these changes, or simply create a new PR that only contains the addition of the dataset | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | 33 | text: adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored
Hi, it looks like this PR includes changes to other files than `swedish_medical_ner`.
Feel free to remove these changes, or simply create a new PR that only contains the addition of the dataset |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hi @lhoestq
Just a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you. | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 27 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hi @lhoestq
Just a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you. |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hey @lhoestq
Thanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something? | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 26 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hey @lhoestq
Thanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something? |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;) | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 22 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;) |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels). | 28 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).
Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hi @lhoestq, I adopted most of your suggestions:
- Dummy data files reduced, including the 2 smallest documents per subset JSONL.
- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.
I would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels). | 84 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).
Hi @lhoestq, I adopted most of your suggestions:
- Dummy data files reduced, including the 2 smallest documents per subset JSONL.
- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.
I would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Thanks for the changes :)
Regarding the labels:
If you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.
The advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.
Let me know if that sounds good to you or if you still want to stick with the labels as they are now. | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels). | 88 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).
Thanks for the changes :)
Regarding the labels:
If you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.
The advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.
Let me know if that sounds good to you or if you still want to stick with the labels as they are now. |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
# Read strs from the labels (list of integers) for the 1st sample of the training split
```
I would like to include this in the README file.
Could you also provide some info on how I could define the supervised key to automate model training, as you said?
Thanks! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels). | 98 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).
Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
# Read strs from the labels (list of integers) for the 1st sample of the training split
```
I would like to include this in the README file.
Could you also provide some info on how I could define the supervised key to automate model training, as you said?
Thanks! |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Thanks for the update :)
Here is an example of usage:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages', split='train')
classlabel = dataset.features["labels"].feature
print(dataset[0]["labels"])
# [1, 20, 7, 3, 0]
print(classlabel.int2str(dataset[0]["labels"]))
# ['100160', '100155', '100158', '100147', '100149']
```
The ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There is nothing more to do :p
I think one last thing to do is just update the `dataset_infos.json` file and we'll be good ! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels). | 87 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).
Thanks for the update :)
Here is an example of usage:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages', split='train')
classlabel = dataset.features["labels"].feature
print(dataset[0]["labels"])
# [1, 20, 7, 3, 0]
print(classlabel.int2str(dataset[0]["labels"]))
# ['100160', '100155', '100158', '100147', '100149']
```
The ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There is nothing more to do :p
I think one last thing to do is just update the `dataset_infos.json` file and we'll be good ! |
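Following up on the example above, a short sketch of how the `ClassLabel` feature can be turned into the `id2label`/`label2id` mappings a classification model expects (the mapping code is illustrative, not part of the dataset script):
```python
from datasets import load_dataset

dataset = load_dataset("multi_eurlex", "all_languages", split="train")
classlabel = dataset.features["labels"].feature

# Map every integer id back to its original EUROVOC concept id (a string),
# and build the reverse mapping for configuring a classification model.
id2label = {i: classlabel.int2str(i) for i in range(classlabel.num_classes)}
label2id = {name: i for i, name in id2label.items()}
print(len(id2label), id2label[0])
```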
https://github.com/huggingface/datasets/pull/2863 | Update dataset URL | Superseded by PR #2864.
@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. π | null | 55 | text: Update dataset URL
Superseded by PR #2864.
@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. π |
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?
 | The same specific exception is caught in other parts of the same
function. | 28 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?
|
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a "FileNotFoundError" while it should not be caught. | The same specific exception is catched in other parts of the same
function. | 38 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a "FileNotFoundError" while it should not be caught. |
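A minimal, self-contained sketch of the problem described here, using a stand-in exception class instead of the real `stopit.TimeoutException`: a broad `except Exception` swallows the timeout, while catching only `FileNotFoundError` lets it propagate to the caller.
```python
class TimeoutException(Exception):
    """Stand-in for stopit.TimeoutException, used only for illustration."""

def load_script():
    raise TimeoutException("took longer than the allowed budget")

# Too broad: the timeout is silently re-labelled as a missing-file error.
try:
    load_script()
except Exception:
    print("treated as FileNotFoundError (wrong)")

# Specific: only genuine missing-file errors are converted; the timeout surfaces.
try:
    load_script()
except FileNotFoundError:
    print("treated as FileNotFoundError")
except TimeoutException as err:
    print(f"timeout propagated to the caller: {err}")
```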
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset` | The same specific exception is catched in other parts of the same
function. | 61 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset` |
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | I understand, you are trying to find a fix for your use case. OK.
Just note that it is also an issue for `datasets` users. Once #2859 is fixed in `datasets`, you will no longer have this issue... | The same specific exception is caught in other parts of the same
function. | 37 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
I understand, you are trying to find a fix for your use case. OK.
Just note that it is also an issue for `datasets` users. Once #2859 is fixed in `datasets`, you will no longer have this issue...
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case. | The same specific exception is catched in other parts of the same
function. | 51 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case. |
https://github.com/huggingface/datasets/pull/2954 | Run tests in parallel | There is a speed up in Windows machines:
- From `13m 52s` to `11m 10s`
In Linux machines, some workers crash with error message:
```
OSError: [Errno 12] Cannot allocate memory
``` | Run CI tests in parallel to speed up the test suite. | 32 | text: Run tests in parallel
Run CI tests in parallel to speed up the test suite.
There is a speed up in Windows machines:
- From `13m 52s` to `11m 10s`
In Linux machines, some workers crash with error message:
```
OSError: [Errno 12] Cannot allocate memory
``` |
https://github.com/huggingface/datasets/pull/2954 | Run tests in parallel | There is also a speed up in Linux machines:
- From `7m 30s` to `5m 32s` | Run CI tests in parallel to speed up the test suite. | 16 | text: Run tests in parallel
Run CI tests in parallel to speed up the test suite.
There is also a speed up in Linux machines:
- From `7m 30s` to `5m 32s` |
https://github.com/huggingface/datasets/pull/2951 | Dummy labels no longer on by default in `to_tf_dataset` | @lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR. | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | 21 | text: Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR. |
https://github.com/huggingface/datasets/pull/2951 | Dummy labels no longer on by default in `to_tf_dataset` | Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | 17 | text: Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps. | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 44 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps. |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | Hi @Hazoom,
You were right: the non-passing test had nothing to do with this PR.
Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
- your commits repeated two times
- and commits which are not yours from the master branch
If you would like to clean your pull request, please run:
```
git reset --hard 587b93a
git fetch upstream master
git merge upstream/master
git push --force origin sede
``` | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 101 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
Hi @Hazoom,
You were right: the non-passing test had nothing to do with this PR.
Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
- your commits repeated two times
- and commits which are not yours from the master branch
If you would like to clean your pull request, please run:
```
git reset --hard 587b93a
git fetch upstream master
git merge upstream/master
git push --force origin sede
``` |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | > Hi @Hazoom,
>
> You were right: the non-passing test had nothing to do with this PR.
>
> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
>
> * your commits repeated two times
> * and commits which are not yours from the master branch
>
> If you would like to clean your pull request, please run:
>
> ```
> git reset --hard 587b93a
> git fetch upstream master
> git merge upstream/master
> git push --force origin sede
> ```
Thanks @albertvillanova | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 120 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
> Hi @Hazoom,
>
> You were right: the non-passing test had nothing to do with this PR.
>
> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:
>
> * your commits repeated two times
> * and commits which are not yours from the master branch
>
> If you would like to clean your pull request, please run:
>
> ```
> git reset --hard 587b93a
> git fetch upstream master
> git merge upstream/master
> git push --force origin sede
> ```
Thanks @albertvillanova |
https://github.com/huggingface/datasets/pull/2942 | Add SEDE dataset | > Nice! Just one final request before approving your pull request:
>
> As you have updated the "QuerySetId" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:
>
> ```
> rm datasets/sede/dataset_infos.json
> datasets-cli test datasets/sede --save_infos --all_configs
> ```
@albertvillanova Good catch, just fixed it. | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | 57 | text: Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006
> Nice! Just one final request before approving your pull request:
>
> As you have updated the "QuerySetId" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:
>
> ```
> rm datasets/sede/dataset_infos.json
> datasets-cli test datasets/sede --save_infos --all_configs
> ```
@albertvillanova Good catch, just fixed it. |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py. That way, all the datasets are one-level deep directories | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 53 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py. That way, all the datasets are one-level deep directories |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
cc @Pierrci | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 33 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
cc @Pierrci |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | > IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
out of curiosity: where is it enforced? | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 39 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)
out of curiosity: where is it enforced? |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | > where is it enforced?
Nowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 28 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
> where is it enforced?
Nowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future |
https://github.com/huggingface/datasets/pull/2938 | Take namespace into account in caching | Thanks for the trick, I'm doing the change :)
We can use
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | 21 | text: Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00
Thanks for the trick, I'm doing the change :)
We can use
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files |
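A hedged sketch of the naming scheme settled on above (an illustrative helper, not the code that landed in the PR): the `/` separating the namespace from the dataset name is replaced by three underscores, a string the Hub is expected to forbid in repo ids and usernames.
```python
from pathlib import Path

def cache_dir_for(repo_id: str, root: str = "~/.cache/huggingface/datasets") -> Path:
    # "username/dataset_name" -> "username___dataset_name"; plain names are unchanged.
    return Path(root).expanduser() / repo_id.replace("/", "___")

print(cache_dir_for("username/dataset_name"))  # .../datasets/username___dataset_name
print(cache_dir_for("squad"))                  # .../datasets/squad
```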
https://github.com/huggingface/datasets/pull/2935 | Add Jigsaw unintended Bias | Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix | Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there. | 22 | text: Add Jigsaw unintended Bias
Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix |
https://github.com/huggingface/datasets/pull/2931 | Fix bug in to_tf_dataset | I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway! | Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | 27 | text: Fix bug in to_tf_dataset
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`
I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway! |
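For context, a small sketch of the difference the fix relies on (the column names are made up): `set_format()` changes the dataset in place, while `with_format()` returns a formatted view and leaves the original untouched, which is why it is the safer choice inside `to_tf_dataset()`.
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3], "y": [0, 1, 0]})

formatted = ds.with_format("numpy")  # new view; `ds` keeps its default format
print(type(formatted[0]["x"]))       # a numpy type

ds.set_format("numpy")               # in place; `ds` itself is now numpy-formatted
print(type(ds[0]["x"]))
```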
https://github.com/huggingface/datasets/pull/2925 | Add tutorial for no-code dataset upload | Cool, love it ! :)
Feel free to add a paragraph saying how to load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stevhliu/demo")
# or to separate each csv file into several splits
data_files = {"train": "train.csv", "test": "test.csv"}
dataset = load_dataset("stevhliu/demo", data_files=data_files)
print(dataset["train"][0])
``` | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | 47 | text: Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
Cool, love it ! :)
Feel free to add a paragraph saying how to load the dataset:
```python
from datasets import load_dataset
dataset = load_dataset("stevhliu/demo")
# or to separate each csv file into several splits
data_files = {"train": "train.csv", "test": "test.csv"}
dataset = load_dataset("stevhliu/demo", data_files=data_files)
print(dataset["train"][0])
``` |
https://github.com/huggingface/datasets/pull/2925 | Add tutorial for no-code dataset upload | Perfect, feel free to mark this PR ready for review :)
cc @albertvillanova do you have any comment ? You can check the tutorial here:
https://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html
Maybe we can just add a list of supported file types:
- csv
- json
- json lines
- text
- parquet | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | 48 | text: Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
Perfect, feel free to mark this PR ready for review :)
cc @albertvillanova do you have any comment ? You can check the tutorial here:
https://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html
Maybe we can just add a list of supported file types:
- csv
- json
- json lines
- text
- parquet |
https://github.com/huggingface/datasets/pull/2916 | Add OpenAI's pass@k code evaluation metric | > The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?
It should work normally, but feel free to test it.
There is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)
You can test this by spawning several processes, where each process loads the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process, and on process 0 it should return the result on all the references/predictions
Let me know if you have questions or if I can help | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted? | 105 | text: Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted?
> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?
It should work normally, but feel free to test it.
There is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)
You can test this by spawning several processes, where each process loads the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process, and on process 0 it should return the result on all the references/predictions
Let me know if you have questions or if I can help |
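A rough sketch of the distributed pattern described above, following the linked documentation (the metric name and the toy predictions are placeholders; the exact keyword names should be double-checked against the docs):
```python
from multiprocessing import Process

from datasets import load_metric

def evaluate_shard(process_id, num_process):
    # Each process loads the same metric and declares its rank and world size.
    metric = load_metric("accuracy", num_process=num_process, process_id=process_id)
    metric.add_batch(predictions=[0, 1], references=[0, 0])
    result = metric.compute()  # only process 0 receives the aggregated result
    if process_id == 0:
        print(result)

if __name__ == "__main__":
    workers = [Process(target=evaluate_shard, args=(rank, 2)) for rank in range(2)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()
```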
https://github.com/huggingface/datasets/pull/2916 | Add OpenAI's pass@k code evaluation metric | Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages. | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted? | 25 | text: Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted?
Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages. |
https://github.com/huggingface/datasets/pull/2906 | feat: πΈ add a function to get a dataset config's split names | > Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
Yes totally :) This tutorial should indeed mention this, given how fundamental it is | Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
Questions:
- <strike>I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?</strike> no -> reverted
- Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos) | 28 | text: feat: πΈ add a function to get a dataset config's split names
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
Questions:
- <strike>I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?</strike> no -> reverted
- Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
Yes totally :) This tutorial should indeed mention this, given how fundamental it is |
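For reference, a hedged example of how these helpers are typically used (names as exposed by later releases of the library; they may differ slightly from the exact API added in this PR):
```python
from datasets import get_dataset_config_names, get_dataset_split_names

configs = get_dataset_config_names("glue")        # e.g. ['cola', 'sst2', ...]
splits = get_dataset_split_names("glue", "mrpc")  # e.g. ['train', 'validation', 'test']
print(configs[:3], splits)
```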
https://github.com/huggingface/datasets/pull/2897 | Add OpenAI's HumanEval dataset | I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :) | This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models. | 24 | text: Add OpenAI's HumanEval dataset
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models.
I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :) |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 18 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: |
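Once the archives above are in place, loading one of the new tasks would look roughly like this (the config name follows the PR title and the local path is a placeholder; the exact `data_dir` handling should be checked against the dataset card):
```python
from datasets import load_dataset

# "si" (speaker identification) is assumed to read the manually prepared
# VoxCeleb1 folder; replace the path with your own download location.
si = load_dataset("superb", "si", data_dir="/path/to/VoxCeleb1")
print(si["train"][0].keys())
```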
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | Thank you so much for adding these subsets @anton-l!
> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Are we allowed to make these datasets public or would that violate the terms of their use? | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 47 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Thank you so much for adding these subsets @anton-l!
> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
Are we allowed to make these datasets public or would that violate the terms of their use? |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :( | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 69 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :( |
https://github.com/huggingface/datasets/pull/2884 | Add IC, SI, ER tasks to SUPERB | > @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(
I think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)? | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | 140 | text: Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.
> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(
I think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)? |
https://github.com/huggingface/datasets/pull/2876 | Extend support for streaming datasets that use pathlib.Path.glob | Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;)
I have added `rglob` as well and fixed some bugs. | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.
Related to #2874, #2866.
CC: @severo | 29 | text: Extend support for streaming datasets that use pathlib.Path.glob
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.
Related to #2874, #2866.
CC: @severo
Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;)
I have added `rglob` as well and fixed some bugs. |
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.
```python
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
``` | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 19 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.
```python
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
``` |
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | @severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... π
| This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 27 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... π
|
https://github.com/huggingface/datasets/pull/2874 | Support streaming datasets that use pathlib | No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in! | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | 25 | text: Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in! |
https://github.com/huggingface/datasets/pull/2873 | adding swedish_medical_ner | Hi, what's the current status of this request? It says Changes requested, but I can't see what changes? | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | 18 | text: adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored
Hi, what's the current status of this request? It says Changes requested, but I can't see what changes? |
https://github.com/huggingface/datasets/pull/2873 | adding swedish_medical_ner | Hi, it looks like this PR includes changes to other files than `swedish_medical_ner`.
Feel free to remove these changes, or simply create a new PR that only contains the addition of the dataset | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | 33 | text: adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored
Hi, it looks like this PR includes changes to other files than `swedish_medical_ner`.
Feel free to remove these changes, or simply create a new PR that only contains the addition of the dataset |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hi @lhoestq
Just a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you. | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 27 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hi @lhoestq
Just a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you. |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hey @lhoestq
Thanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something? | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 26 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hey @lhoestq
Thanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something? |
https://github.com/huggingface/datasets/pull/2867 | Add CaSiNo dataset | Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;) | Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf | 22 | text: Add CaSiNo dataset
Hi. I request you to add our dataset to the repository.
This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;) |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | 28 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hi @lhoestq, I adopted most of your suggestions:
- Dummy data files reduced, including the 2 smallest documents per subset JSONL.
- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.
I would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | 84 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
Hi @lhoestq, I adopted most of your suggestions:
- Dummy data files reduced, including the 2 smallest documents per subset JSONL.
- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.
I would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Thanks for the changes :)
Regarding the labels:
If you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.
The advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.
Let me know if that sounds good to you or if you still want to stick with the labels as they are now. | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | 88 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
Thanks for the changes :)
Regarding the labels:
If you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.
The advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.
Let me know if that sounds good to you or if you still want to stick with the labels as they are now. |
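As a minimal sketch of the suggestion above (labels stored with `ClassLabel` instead of raw strings), assuming a multi-label field declared as a sequence; the three EUROVOC ids below are placeholders borrowed from the usage example further down:
```python
import datasets

# Multi-label field: a sequence of ClassLabel ids instead of raw label strings.
# The names are a few EUROVOC ids used purely as placeholders.
features = datasets.Features(
    {
        "text": datasets.Value("string"),
        "labels": datasets.Sequence(
            datasets.ClassLabel(names=["100147", "100149", "100160"])
        ),
    }
)

# Ids are stored as integers; str2int/int2str convert back and forth.
print(features["labels"].feature.str2int("100160"))  # 2
print(features["labels"].feature.int2str(0))         # '100147'
```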
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
# Read strs from the labels (list of integers) for the 1st sample of the training split
```
I would like to include this in the README file.
Could you also provide some info on how I could define the supervised key to automate model training, as you said?
Thanks! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | 98 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages')
# Read strs from the labels (list of integers) for the 1st sample of the training split
```
I would like to include this in the README file.
Could you also provide some info on how I could define the supervised key to automate model training, as you said?
Thanks! |
https://github.com/huggingface/datasets/pull/2865 | Add MultiEURLEX dataset | Thanks for the update :)
Here is an example of usage:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages', split='train')
classlabel = dataset.features["labels"].feature
print(dataset[0]["labels"])
# [1, 20, 7, 3, 0]
print(classlabel.int2str(dataset[0]["labels"]))
# ['100160', '100155', '100158', '100147', '100149']
```
The ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There's nothing more to do :p
I think one last thing to do is just update the `dataset_infos.json` file and we'll be good ! | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | 87 | text: Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
Thanks for the update :)
Here is an example of usage:
```python
from datasets import load_dataset
dataset = load_dataset('multi_eurlex', 'all_languages', split='train')
classlabel = dataset.features["labels"].feature
print(dataset[0]["labels"])
# [1, 20, 7, 3, 0]
print(classlabel.int2str(dataset[0]["labels"]))
# ['100160', '100155', '100158', '100147', '100149']
```
The ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There's nothing more to do :p
I think one last thing to do is just update the `dataset_infos.json` file and we'll be good ! |
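A short, hypothetical follow-up sketch of how that `ClassLabel` feeds a model's `id2label`/`label2id` mapping; the checkpoint name is a placeholder:
```python
from datasets import load_dataset
from transformers import AutoConfig

dataset = load_dataset("multi_eurlex", "all_languages", split="train")
classlabel = dataset.features["labels"].feature

# Reuse the dataset's label space when configuring a classification model.
config = AutoConfig.from_pretrained(
    "xlm-roberta-base",  # placeholder checkpoint
    num_labels=classlabel.num_classes,
    id2label=dict(enumerate(classlabel.names)),
    label2id={name: i for i, name in enumerate(classlabel.names)},
)
```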
https://github.com/huggingface/datasets/pull/2863 | Update dataset URL | Superseded by PR #2864.
@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. | null | 55 | text: Update dataset URL
Superseded by PR #2864.
@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. |
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?
| The same specific exception is caught in other parts of the same
function. | 28 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?
|
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a "FileNotFoundError" while it should not be caught. | The same specific exception is catched in other parts of the same
function. | 38 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a "FileNotFoundError" while it should not be caught. |
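A rough sketch of the approach described in this comment, assuming the `stopit` package; the dataset name is a placeholder and the 60-second budget is arbitrary:
```python
# Sketch only: "some_dataset" is a placeholder and 60 seconds is an arbitrary budget.
import stopit
from datasets import load_dataset

with stopit.ThreadingTimeout(60) as timeout_ctx:
    # stopit raises stopit.TimeoutException inside this block when the budget is hit;
    # a broad `except Exception` deeper in the call stack would swallow it.
    dataset = load_dataset("some_dataset", streaming=True)

if timeout_ctx.state == timeout_ctx.TIMED_OUT:
    print("load_dataset did not finish within 60 seconds")
```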
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset` | The same specific exception is catched in other parts of the same
function. | 61 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset` |
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | I understand, you are trying to find a fix for your use case. OK.
Just note that it is also an issue for `datasets` users. Once #2859 is fixed in `datasets`, you will no longer have this issue... | The same specific exception is caught in other parts of the same
function. | 37 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
I understand, you are trying to find a fix for your use case. OK.
Just note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue... |
https://github.com/huggingface/datasets/pull/2861 | fix: π be more specific when catching exceptions | Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case. | The same specific exception is catched in other parts of the same
function. | 51 | text: fix: π be more specific when catching exceptions
The same specific exception is caught in other parts of the same
function.
Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case. |
https://github.com/huggingface/datasets/pull/2830 | Add imagefolder dataset | @lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible?
My hacky community version [here](https://huggingface.co/datasets/nateraw/image-folder) does this, but it wouldn't pass the test suite here. Any thoughts? | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | 59 | text: Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible?
My hacky community version [here](https://huggingface.co/datasets/nateraw/image-folder) does this, but it wouldn't pass the test suite here. Any thoughts? |
https://github.com/huggingface/datasets/pull/2830 | Add imagefolder dataset | Hi ! Dataset builders that require some `data_files` like `csv` or `json` are handled differently than actual dataset scripts.
In particular:
- they are placed directly in the `src` folder of the lib so that you can use it without internet connection (more exactly in `src/datasets/packaged_modules/<builder_name>.py`). So feel free to move the dataset python file there. You also need to register it in `src/datasets/packaged_modules/__init__.py`
- they are handled a bit differently in our test suite (see the `PackagedDatasetTest` class in `test_dataset_common.py`). To be able to test the builder with your dummy data, you just need to modify `get_packaged_dataset_dummy_data_files` in `test_dataset_common.py` to return the right `data_files` for your builder. The dummy data can stay in `datasets/image_folder/dummy`
Let me know if you have questions or if I can help ! | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | 128 | text: Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
Hi ! Dataset builders that require some `data_files` like `csv` or `json` are handled differently than actual dataset scripts.
In particular:
- they are placed directly in the `src` folder of the lib so that you can use it without internet connection (more exactly in `src/datasets/packaged_modules/<builder_name>.py`). So feel free to move the dataset python file there. You also need to register it in `src/datasets/packaged_modules/__init__.py`
- they are handled a bit differently in our test suite (see the `PackagedDatasetTest` class in `test_dataset_common.py`). To be able to test the builder with your dummy data, you just need to modify `get_packaged_dataset_dummy_data_files` in `test_dataset_common.py` to return the right `data_files` for your builder. The dummy data can stay in `datasets/image_folder/dummy`
Let me know if you have questions or if I can help ! |
https://github.com/huggingface/datasets/pull/2830 | Add imagefolder dataset | Hey @lhoestq , I actually already did both of those things. I'm trying to get the `image-classification` task to work now.
For example...When you run `ds = load_dataset('imagefolder', data_files='my_files')`, with a directory called `./my_files` that looks like this:
```
my_files
----| Cat
--------| image1.jpg
--------| ...
----| Dog
--------| image1.jpg
--------| ...
```
...We should set the dataset's `labels` feature to `datasets.features.ClassLabel(names=['cat', 'dog'])` dynamically with class names we find by getting a list of directories in `my_files` (via `data_files`). Otherwise the `datasets.tasks.ImageClassification` task will break, as the `labels` feature is not a `ClassLabel`.
I couldn't figure out how to access the `data_files` in the builder's `_info` function in a way that would pass in the test suite. | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | 117 | text: Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
Hey @lhoestq , I actually already did both of those things. I'm trying to get the `image-classification` task to work now.
For example...When you run `ds = load_dataset('imagefolder', data_files='my_files')`, with a directory called `./my_files` that looks like this:
```
my_files
----| Cat
--------| image1.jpg
--------| ...
----| Dog
--------| image1.jpg
--------| ...
```
...We should set the dataset's `labels` feature to `datasets.features.ClassLabel(names=['cat', 'dog'])` dynamically with class names we find by getting a list of directories in `my_files` (via `data_files`). Otherwise the `datasets.tasks.ImageClassification` task will break, as the `labels` feature is not a `ClassLabel`.
I couldn't figure out how to access the `data_files` in the builder's `_info` function in a way that would pass in the test suite. |
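A small sketch of the idea described here, i.e. deriving the `ClassLabel` names from the sub-directory names; lower-casing the names is an assumption taken from the `['cat', 'dog']` example above, and this is not the final imagefolder implementation:
```python
import os
from datasets import ClassLabel

def infer_labels(data_dir: str) -> ClassLabel:
    # Each immediate sub-directory (Cat/, Dog/, ...) becomes one class name.
    names = sorted(
        entry.name.lower() for entry in os.scandir(data_dir) if entry.is_dir()
    )
    return ClassLabel(names=names)

labels = infer_labels("./my_files")
print(labels.names)  # ['cat', 'dog'] for the layout shown above
```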
https://github.com/huggingface/datasets/pull/2830 | Add imagefolder dataset | Nice ! Then maybe you can use `self.config.data_files` in `_info()` ?
What error are you getting in the test suite ?
Also note that `data_files` was first developed to accept paths to actual files, not directories. In particular, it fetches the metadata of all the data_files to get a unique hash for the caching mechanism. So we may need to do a few changes first. | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | 65 | text: Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
Nice ! Then maybe you can use `self.config.data_files` in `_info()` ?
What error are you getting in the test suite ?
Also note that `data_files` was first developed to accept paths to actual files, not directories. In particular, it fetches the metadata of all the data_files to get a unique hash for the caching mechanism. So we may need to do a few changes first. |
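A hypothetical sketch of that suggestion, assuming `self.config.data_files` resolves to a plain directory path (which, as noted above, may require changes to how `data_files` is handled); the class and column names are illustrative only:
```python
import os
import datasets


class ImageFolderSketch(datasets.GeneratorBasedBuilder):
    """Illustrative only: reads the class names from self.config.data_files in _info()."""

    def _info(self):
        data_dir = self.config.data_files  # assumed here to be a directory path
        class_names = sorted(
            entry.name for entry in os.scandir(data_dir) if entry.is_dir()
        )
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image_file_path": datasets.Value("string"),
                    "labels": datasets.ClassLabel(names=class_names),
                }
            ),
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": self.config.data_files},
            )
        ]

    def _generate_examples(self, data_dir):
        for label in sorted(os.listdir(data_dir)):
            class_dir = os.path.join(data_dir, label)
            if not os.path.isdir(class_dir):
                continue
            for fname in sorted(os.listdir(class_dir)):
                yield f"{label}/{fname}", {
                    "image_file_path": os.path.join(class_dir, fname),
                    "labels": label,  # ClassLabel encodes the string to an int
                }
```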
https://github.com/huggingface/datasets/pull/2822 | Add url prefix convention for many compression formats | I just added some documentation about how streaming works with chained URLs.
I will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts. | ## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
with open(urlpath) as f:
....
```
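For illustration, the same chaining convention can be exercised directly with `fsspec`; `foo.bar` is the placeholder host from the examples above, not a real archive:
```python
import fsspec

# "zip://<member>::<url>" -- the part before :: selects a file inside the archive,
# the part after :: says where the archive itself lives. foo.bar is a placeholder.
with fsspec.open("zip://data.jsonl::https://foo.bar/archive.zip", "rt") as f:
    for line in f:
        ...
```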
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as fsspec and it removes all ambiguities
cc @albertvillanova @lewtun | 43 | text: Add url prefix convention for many compression formats
## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
with open(urlpath) as f:
....
```
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as fsspec and it removes all ambiguities
cc @albertvillanova @lewtun
I just added some documentation about how streaming works with chained URLs.
I will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts. |
https://github.com/huggingface/datasets/pull/2822 | Add url prefix convention for many compression formats | Merging this one now, next step is to resolve the conflicts in #2662 and update the docs for URL chaining :)
There is also the glob feature of zip files that I need to add, to be able to do this for example:
```python
load_dataset("json", data_files="zip://*::https://foo.bar/archive.zip")
``` | ## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
with open(urlpath) as f:
....
```
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as fsspec and it removes all ambiguities
cc @albertvillanova @lewtun | 46 | text: Add url prefix convention for many compression formats
## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
with open(urlpath) as f:
....
```
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as fsspec and it removes all ambiguities
cc @albertvillanova @lewtun
Merging this one now, next step is to resolve the conflicts in #2662 and update the docs for URL chaining :)
There is also the glob feature of zip files that I need to add, to be able to do this for example:
```python
load_dataset("json", data_files="zip://*::https://foo.bar/archive.zip")
``` |
https://github.com/huggingface/datasets/pull/2819 | Added XL-Sum dataset | Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000 | Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utls.py` were increased to enable downloading from the original google drive links. | 22 | text: Added XL-Sum dataset
Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000 |
https://github.com/huggingface/datasets/pull/2819 | Added XL-Sum dataset | > Thanks for adding this one! I just did some minor changes and set the timeout back to 100sec instead of 1000
Thank you for updating the language tags. I tried timeout values up to 300 sec on my local machine, but some of the larger files still get timed out. Although this could have been a network issue on my end, have you verified that 100 sec works for all files? | Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utls.py` were increased to enable downloading from the original google drive links. | 72 | text: Added XL-Sum dataset
Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
> Thanks for adding this one! I just did some minor changes and set the timeout back to 100sec instead of 1000
Thank you for updating the language tags. I tried timeout values up to 300 sec on my local machine, but some of the larger files still get timed out. Although this could have been a network issue on my end, have you verified that 100 sec works for all files? |
https://github.com/huggingface/datasets/pull/2819 | Added XL-Sum dataset | Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.
Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.
So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.
HF can probably help with hosting the data if needed | Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utls.py` were increased to enable downloading from the original google drive links. | 79 | text: Added XL-Sum dataset
Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.
Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.
So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.
HF can probably help with hosting the data if needed |
https://github.com/huggingface/datasets/pull/2819 | Added XL-Sum dataset | > Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.
> Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.
>
> So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.
> HF can probably help with hosting the data if needed
It'd be great if the dataset can be hosted in HF. How should I proceed here though? Upload the dataset files as a community dataset and update the links in this pull request or is there a more straightforward way? | Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utls.py` were increased to enable downloading from the original google drive links. | 124 | text: Added XL-Sum dataset
Added XL-Sum dataset published in ACL-IJCNLP 2021. (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
> Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.
> Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.
>
> So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.
> HF can probably help with hosting the data if needed
It'd be great if the dataset can be hosted in HF. How should I proceed here though? Upload the dataset files as a community dataset and update the links in this pull request or is there a more straightforward way? |
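For completeness, one hedged way to do the community-dataset upload mentioned here, assuming a recent `huggingface_hub` version; the repo id and file name are placeholders:
```python
from huggingface_hub import HfApi

api = HfApi()
# Placeholder repo id and file name; requires a prior `huggingface-cli login`.
api.create_repo("my-org/xlsum-archives", repo_type="dataset", exist_ok=True)
api.upload_file(
    path_or_fileobj="xlsum_v2.tar.bz2",
    path_in_repo="xlsum_v2.tar.bz2",
    repo_id="my-org/xlsum-archives",
    repo_type="dataset",
)
```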