html_url | title | comments | body | comment_length_in_words | text
---|---|---|---|---|---
https://github.com/huggingface/datasets/pull/2621 | Use prefix to allow exceeding Windows MAX_PATH | Nice! Have you had a chance to test it on a Windows machine with the max path limit enabled? AFAIK the CI doesn't have the path limit | By using this prefix, you can exceed the Windows MAX_PATH limit.
See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces
Related to #2524, #2220. | 29 | text: Use prefix to allow exceeding Windows MAX_PATH
By using this prefix, you can exceed the Windows MAX_PATH limit.
See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces
Related to #2524, #2220.
Nice! Have you had a chance to test it on a Windows machine with the max path limit enabled? AFAIK the CI doesn't have the path limit |
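As an illustration of the prefix mechanism discussed in this PR (a hypothetical helper, not the PR's actual code), prepending the Win32 extended-length prefix `\\?\` lets Windows file APIs accept paths longer than the 260-character MAX_PATH limit:

```python
def add_long_path_prefix(path: str) -> str:
    # Hypothetical helper: prepend the Win32 extended-length prefix so
    # Windows file APIs accept paths longer than MAX_PATH (260 chars).
    # Only absolute, already-normalized paths should receive the prefix.
    if path.startswith("\\\\?\\"):
        return path  # already prefixed
    return "\\\\?\\" + path

print(add_long_path_prefix("C:\\very\\deep\\directory\\tree"))
# \\?\C:\very\deep\directory\tree
```

Note that the prefixed form disables path normalization (e.g. resolution of `..`), which is why it is applied only to fully resolved paths.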
https://github.com/huggingface/datasets/pull/2620 | Add speech processing tasks | > Are there any `task_categories:automatic-speech-recognition` datasets for which we should update the tags?
Yes, there are a few - I'll fix them tomorrow :) | This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category.
The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set. | 24 | text: Add speech processing tasks
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category.
The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
> Are there any `task_categories:automatic-speech-recognition` datasets for which we should update the tags?
Yes, there are a few - I'll fix them tomorrow :) |
https://github.com/huggingface/datasets/pull/2619 | Add ASR task for SUPERB | > Thanks!
>
> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`)?
Great catch! I've now added the ASR task template (along with a mapping from SUPERB task -> template) and updated the `dataset_infos.json` :) | This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :) | 41 | text: Add ASR task for SUPERB
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
> Thanks!
>
> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`)?
Great catch! I've now added the ASR task template (along with a mapping from SUPERB task -> template) and updated the `dataset_infos.json` :) |
https://github.com/huggingface/datasets/pull/2619 | Add ASR task for SUPERB | > Good!
>
> I have a suggested refactoring... Tell me what you think! :)
Your approach is much more elegant - I've included your suggestions :) | This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :) | 27 | text: Add ASR task for SUPERB
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
> Good!
>
> I have a suggested refactoring... Tell me what you think! :)
Your approach is much more elegant - I've included your suggestions :) |
https://github.com/huggingface/datasets/pull/2616 | Support remote data files | @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`? | Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf | 16 | text: Support remote data files
Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf
@lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`? |
https://github.com/huggingface/datasets/pull/2616 | Support remote data files | > @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?
Sure! We can get the ETag with
```python
headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) # auth for private repos
etag = http_head(url, headers=headers).headers.get("ETag")
```
Since the computation of the `config_id` is done in `DatasetBuilder.__init__`, this means we need to add a new parameter `use_auth_token` to `DatasetBuilder.__init__`.
Does that sound good? We can add this in a follow-up PR | Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf | 78 | text: Support remote data files
Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf
> @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?
Sure! We can get the ETag with
```python
headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) # auth for private repos
etag = http_head(url, headers=headers).headers.get("ETag")
```
Since the computation of the `config_id` is done in `DatasetBuilder.__init__`, this means we need to add a new parameter `use_auth_token` to `DatasetBuilder.__init__`.
Does that sound good? We can add this in a follow-up PR |
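A minimal sketch of how an ETag could be folded into the config id (hypothetical helper and names; the library's actual `create_config_id` works differently):

```python
import hashlib
from typing import Optional

def config_id_with_etag(base_config_id: str, etag: Optional[str]) -> str:
    # Hypothetical sketch: mix the remote file's ETag into the config id
    # so the local cache is invalidated whenever the remote file changes.
    if etag is None:
        return base_config_id  # no ETag available: fall back to the base id
    digest = hashlib.sha256(etag.encode("utf-8")).hexdigest()[:16]
    return f"{base_config_id}-{digest}"
```

Since the ETag is opaque server-provided metadata, hashing it (rather than embedding it verbatim) keeps the config id filesystem-safe.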
https://github.com/huggingface/datasets/pull/2612 | Return Python float instead of numpy.float64 in sklearn metrics | I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490 | This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
``` | 16 | text: Return Python float instead of numpy.float64 in sklearn metrics
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
```
I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490 |
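The conversion itself can be sketched as follows (an illustrative helper, not necessarily the PR's exact code): numpy scalars expose `.item()`, which returns the equivalent built-in Python type.

```python
def to_python_float(value) -> float:
    # numpy scalars expose .item(), which converts them to the
    # corresponding built-in Python type; plain ints/floats pass
    # through unchanged.
    if hasattr(value, "item"):
        value = value.item()
    return float(value)
```

With the value converted, `yaml.dump` emits a plain `1.0` instead of the `!!python/object/apply:numpy...` blob shown in the PR description.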
https://github.com/huggingface/datasets/pull/2612 | Return Python float instead of numpy.float64 in sklearn metrics | It could be surprising at first to use `tolist()` on numpy scalars but it works ^^ | This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
``` | 16 | text: Return Python float instead of numpy.float64 in sklearn metrics
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
```
It could be surprising at first to use `tolist()` on numpy scalars but it works ^^ |
https://github.com/huggingface/datasets/pull/2589 | Support multilabel metrics | Hi! Thanks for the fix :)
If I understand correctly, `OptionalSequence` doesn't have an associated Arrow type that we know in advance, unlike the other feature types, because it depends on the type of the examples.
For example, I tested this and it raises an error:
```python
import datasets as ds
import pyarrow as pa
features = ds.Features({"a": ds.features.OptionalSequence(ds.Value("int32"))})
batch = {"a": [[0]]}
writer = ds.ArrowWriter(features=features, stream=pa.BufferOutputStream())
writer.write_batch(batch)
# ArrowInvalid: Could not convert [0] with type list: tried to convert to int
```
This error happens because `features.type` is `StructType(struct<a: int32>)`.
Another way to add support for multilabel would be to have several configurations for these metrics. By default it would set the features without sequences, and for the multilabel configuration it would use features with sequences. Let me know what you think | Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554. | 135 | text: Support multilabel metrics
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554.
Hi! Thanks for the fix :)
If I understand correctly, `OptionalSequence` doesn't have an associated Arrow type that we know in advance, unlike the other feature types, because it depends on the type of the examples.
For example, I tested this and it raises an error:
```python
import datasets as ds
import pyarrow as pa
features = ds.Features({"a": ds.features.OptionalSequence(ds.Value("int32"))})
batch = {"a": [[0]]}
writer = ds.ArrowWriter(features=features, stream=pa.BufferOutputStream())
writer.write_batch(batch)
# ArrowInvalid: Could not convert [0] with type list: tried to convert to int
```
This error happens because `features.type` is `StructType(struct<a: int32>)`.
Another way to add support for multilabel would be to have several configurations for these metrics. By default it would set the features without sequences, and for the multilabel configuration it would use features with sequences. Let me know what you think |
https://github.com/huggingface/datasets/pull/2589 | Support multilabel metrics | Hi @lhoestq, thanks for your feedback :)
Definitely, your suggested approach is simpler. I am going to rework my whole PR unless we can envision some other use cases where an `OptionalSequence` might be convenient, but for now I can't think of any... | Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554. | 43 | text: Support multilabel metrics
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554.
Hi @lhoestq, thanks for your feedback :)
Definitely, your suggested approach is simpler. I am going to rework my whole PR unless we can envision some other use cases where an `OptionalSequence` might be convenient, but for now I can't think of any... |
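The per-configuration approach suggested above can be sketched like this (hypothetical schema dicts for illustration, not the library's actual `Features` objects):

```python
def metric_feature_schema(config_name: str = "default") -> dict:
    # Hypothetical sketch: the default config expects one scalar label
    # per example, while a "multilabel" config expects a list of labels
    # per example - so no OptionalSequence type is needed; the schema
    # is fixed and known in advance for each configuration.
    if config_name == "multilabel":
        return {"predictions": ["int32"], "references": ["int32"]}
    return {"predictions": "int32", "references": "int32"}
```

Because each configuration has a fixed schema, the Arrow type is known before any examples are seen, which avoids the write error shown above.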
https://github.com/huggingface/datasets/pull/2582 | Add skip and take | @lhoestq looks good. I tried with https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed and it worked nicely. I would add more unit tests for edge cases. What happens if the n is larger than the total number of samples? Just to make sure these cases are handled properly. | As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allows to do basic splitting of iterable datasets.
You can create new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`
One implementation detail:
Using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer.
I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard we don't know what shards to take or skip.
I think this is ok though since users can shuffle before doing take or skip. I mentioned this in the documentation
cc @vblagoje @lewtun | 42 | text: Add skip and take
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets.
You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`.
One implementation detail:
Using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer.
I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard we don't know what shards to take or skip.
I think this is ok though since users can shuffle before doing take or skip. I mentioned this in the documentation
cc @vblagoje @lewtun
@lhoestq looks good. I tried with https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed and it worked nicely. I would add more unit tests for edge cases. What happens if `n` is larger than the total number of samples? Just to make sure these cases are handled properly. |
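The edge case raised above is well defined under the usual streaming semantics: taking more examples than the stream holds simply exhausts it. A sketch of those semantics with `itertools.islice` (illustrative, not the library's actual implementation):

```python
from itertools import islice

def take(stream, n):
    # Yield the first n examples of a (possibly unbounded) stream.
    return islice(stream, n)

def skip(stream, n):
    # Skip the first n examples and yield the rest.
    return islice(stream, n, None)

print(list(take(range(10), 3)))  # [0, 1, 2]
print(list(skip(range(10), 8)))  # [8, 9]
# n larger than the stream: take yields everything, skip yields nothing.
print(list(take(range(2), 5)))   # [0, 1]
print(list(skip(range(2), 5)))   # []
```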
https://github.com/huggingface/datasets/pull/2582 | Add skip and take | Yup I'll add the tests thanks ;)
Moreover, I just noticed something in your wiki snippets code. FYI you're using `++passage_counter` at https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed/blob/main/wikipedia_snippets_streamed.py#L102 but in Python this doesn't increment the value @vblagoje | As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets.
You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`.
One implementation detail:
Using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer.
I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard we don't know what shards to take or skip.
I think this is ok though since users can shuffle before doing take or skip. I mentioned this in the documentation
cc @vblagoje @lewtun | 33 | text: Add skip and take
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets.
You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`.
One implementation detail:
Using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer.
I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard we don't know what shards to take or skip.
I think this is ok though since users can shuffle before doing take or skip. I mentioned this in the documentation
cc @vblagoje @lewtun
Yup I'll add the tests thanks ;)
Moreover, I just noticed something in your wiki snippets code. FYI you're using `++passage_counter` at https://huggingface.co/datasets/vblagoje/wikipedia_snippets_streamed/blob/main/wikipedia_snippets_streamed.py#L102 but in Python this doesn't increment the value @vblagoje |
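To illustrate the remark above: Python has no `++` operator, so `++x` parses as two unary plus operations and leaves the value unchanged:

```python
passage_counter = 0
result = ++passage_counter  # parsed as +(+passage_counter): a no-op
print(result)            # 0 - nothing was incremented
passage_counter += 1     # the idiomatic way to increment in Python
print(passage_counter)   # 1
```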
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | > What if people want to run some tests without having zstandard ?
> Usually what we do is add a decorator @require_zstandard for example
@lhoestq I think I'm missing something here...
Tests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir).
On the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).
Apart from `zstandard`, there are many other dev/test required dependencies for running tests, and we do not have a `@require_toto` for each and every one of these dependencies in our tests:
- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require),
- `boto3` (in test_filesystem.py),
- `seqeval` (in test_metric_common.py),
- `bs4` (used by eli5 and tested in test_hf_gcp.py)
- ...
So IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed. | Close #2572.
cc: @thomwolf | 240 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
> What if people want to run some tests without having zstandard ?
> Usually what we do is add a decorator @require_zstandard for example
@lhoestq I think I'm missing something here...
Tests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir).
On the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).
Apart from `zstandard`, there are many other dev/test required dependencies for running tests, and we do not have a `@require_toto` for each and every one of these dependencies in our tests:
- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require),
- `boto3` (in test_filesystem.py),
- `seqeval` (in test_metric_common.py),
- `bs4` (used by eli5 and tested in test_hf_gcp.py)
- ...
So IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed. |
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | Hi!
I was saying that because the other dependencies you mentioned are only required for _some_ tests, while here zstd is required for _all_ tests since it's imported in `conftest.py`.
Feel free to keep it as it is right now, or maybe move the fixture to `test_file_utils.py` to allow users without zstd to run tests for their builders, dataset cards, etc. without issues | Close #2572.
cc: @thomwolf | 65 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
Hi!
I was saying that because the other dependencies you mentioned are only required for _some_ tests, while here zstd is required for _all_ tests since it's imported in `conftest.py`.
Feel free to keep it as it is right now, or maybe move the fixture to `test_file_utils.py` to allow users without zstd to run tests for their builders, dataset cards, etc. without issues |
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | @lhoestq does this mean that the Pile could have streaming support in the future? AFAIK streaming doesn't support the zstandard compression type | Close #2572.
cc: @thomwolf | 21 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
@lhoestq does this mean that the Pile could have streaming support in the future? AFAIK streaming doesn't support the zstandard compression type |
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | > @lhoestq does this mean that the Pile could have streaming support in the future? AFAIK streaming doesn't support the zstandard compression type
Just for reference, I tried to stream one of the `.zst` files from [The Pile](https://the-eye.eu/public/AI/pile/) using
```python
data_files = ["https://the-eye.eu/public/AI/pile/train/00.jsonl.zst"]
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
```
and got the following error:
```
Using custom data configuration default-4e71acadc389c254
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
/tmp/ipykernel_1187680/10848115.py in <module>
1 data_files = ["https://the-eye.eu/public/AI/pile/train/00.jsonl.zst"]
2
----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
4
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
835 # this extends the open and os.path.join functions for data streaming
836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)
--> 837 return builder_instance.as_streaming_dataset(
838 split=split,
839 use_auth_token=use_auth_token,
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
922 data_dir=self.config.data_dir,
923 )
--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
925 # By default, return all splits
926 if split is None:
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py in _split_generators(self, dl_manager)
50 if not self.config.data_files:
51 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)
53 if isinstance(data_files, (str, list, tuple)):
54 files = data_files
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
140
141 def download_and_extract(self, url_or_urls):
--> 142 return self.extract(self.download(url_or_urls))
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
115
116 def extract(self, path_or_paths):
--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
118 return urlpaths
119
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
202 num_proc = 1
203 if num_proc <= 1 or len(iterable) <= num_proc:
--> 204 mapped = [
205 _single_map_nested((function, obj, types, None, True))
206 for obj in utils.tqdm(iterable, disable=disable_tqdm)
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
203 if num_proc <= 1 or len(iterable) <= num_proc:
204 mapped = [
--> 205 _single_map_nested((function, obj, types, None, True))
206 for obj in utils.tqdm(iterable, disable=disable_tqdm)
207 ]
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
141 # Singleton first to spare some computation
142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 143 return function(data_struct)
144
145 # Reduce logging to keep things readable in multiprocessing with tqdm
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
119
120 def _extract(self, urlpath):
--> 121 protocol = self._get_extraction_protocol(urlpath)
122 if protocol is None:
123 # no extraction
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)
137 elif path.endswith(".zip"):
138 return "zip"
--> 139 raise NotImplementedError(f"Extraction protocol for file at {urlpath} is not implemented yet")
140
141 def download_and_extract(self, url_or_urls):
NotImplementedError: Extraction protocol for file at https://the-eye.eu/public/AI/pile/train/00.jsonl.zst is not implemented yet
```
i'm not sure whether @Shashi456 is referring to a fundamental limitation with "streaming" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`
| Close #2572.
cc: @thomwolf | 429 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
> @lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesn't support zstandard compressed type
just for reference, i tried to stream one of the `.zst` files from [the pile](https://the-eye.eu/public/AI/pile/) using
```python
data_files = ["https://the-eye.eu/public/AI/pile/train/00.jsonl.zst"]
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
```
and got the following error:
```
Using custom data configuration default-4e71acadc389c254
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
/tmp/ipykernel_1187680/10848115.py in <module>
1 data_files = ["https://the-eye.eu/public/AI/pile/train/00.jsonl.zst"]
2
----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
4
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
835 # this extends the open and os.path.join functions for data streaming
836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)
--> 837 return builder_instance.as_streaming_dataset(
838 split=split,
839 use_auth_token=use_auth_token,
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
922 data_dir=self.config.data_dir,
923 )
--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
925 # By default, return all splits
926 if split is None:
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py in _split_generators(self, dl_manager)
50 if not self.config.data_files:
51 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)
53 if isinstance(data_files, (str, list, tuple)):
54 files = data_files
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
140
141 def download_and_extract(self, url_or_urls):
--> 142 return self.extract(self.download(url_or_urls))
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
115
116 def extract(self, path_or_paths):
--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
118 return urlpaths
119
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
202 num_proc = 1
203 if num_proc <= 1 or len(iterable) <= num_proc:
--> 204 mapped = [
205 _single_map_nested((function, obj, types, None, True))
206 for obj in utils.tqdm(iterable, disable=disable_tqdm)
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
203 if num_proc <= 1 or len(iterable) <= num_proc:
204 mapped = [
--> 205 _single_map_nested((function, obj, types, None, True))
206 for obj in utils.tqdm(iterable, disable=disable_tqdm)
207 ]
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
141 # Singleton first to spare some computation
142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 143 return function(data_struct)
144
145 # Reduce logging to keep things readable in multiprocessing with tqdm
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
119
120 def _extract(self, urlpath):
--> 121 protocol = self._get_extraction_protocol(urlpath)
122 if protocol is None:
123 # no extraction
~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)
137 elif path.endswith(".zip"):
138 return "zip"
--> 139 raise NotImplementedError(f"Extraction protocol for file at {urlpath} is not implemented yet")
140
141 def download_and_extract(self, url_or_urls):
NotImplementedError: Extraction protocol for file at https://the-eye.eu/public/AI/pile/train/00.jsonl.zst is not implemented yet
```
i'm not sure whether @Shashi456 is referring to a fundamental limitation with "streaming" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`
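For reference, the `_get_extraction_protocol` dispatch seen in the traceback is a plain suffix check, so supporting `.zst` is mostly a matter of adding a branch for it. Here is a minimal sketch of that kind of extension-based dispatch (the function name and the `zstd` branch are illustrative, not the actual `datasets` implementation):

```python
def get_extraction_protocol(urlpath: str):
    """Guess the decompression protocol of a remote file from its suffix (sketch)."""
    if urlpath.endswith(".gz"):
        return "gzip"
    if urlpath.endswith(".zip"):
        return "zip"
    if urlpath.endswith(".zst"):
        # hypothetical branch; real support would also need to wrap the
        # HTTP stream in a Zstandard decompressor when the file is opened
        return "zstd"
    if urlpath.endswith((".txt", ".csv", ".json", ".jsonl")):
        return None  # plain file: no extraction needed
    raise NotImplementedError(f"Extraction protocol for file at {urlpath} is not implemented yet")
```

Recognizing the suffix is only the first half of the work: the streaming `open` would still need a Zstandard decoder on top of the HTTP stream for the `.jsonl.zst` files above.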
|
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | @lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case. | Close #2572.
cc: @thomwolf | 23 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
@lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case. |
https://github.com/huggingface/datasets/pull/2578 | Support Zstandard compressed files | @lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786. | Close #2572.
cc: @thomwolf | 21 | text: Support Zstandard compressed files
Close #2572.
cc: @thomwolf
@lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786. |
https://github.com/huggingface/datasets/pull/2565 | Inject templates for ASR datasets | thanks for the feedback @lhoestq! i've added the new language codes and this PR should be ready for a merge :) | This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them.
I also fixed a bunch of the tags in the READMEs | 21 | text: Inject templates for ASR datasets
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them.
I also fixed a bunch of the tags in the READMEs
thanks for the feedback @lhoestq! i've added the new language codes and this PR should be ready for a merge :) |
https://github.com/huggingface/datasets/pull/2560 | fix Dataset.map when num_procs > num rows | Hi ! Thanks for fixing this :)
Looks like you have tons of changes due to code formatting.
We're using `black` for this, with a custom line length. To run our code formatting, you just need to run
```
make style
```
Then for the windows error in the CI, I'm looking into it. It's probably just a file that isn't properly closed | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
``` | 63 | text: fix Dataset.map when num_procs > num rows
closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
Hi ! Thanks for fixing this :)
Looks like you have tons of changes due to code formatting.
We're using `black` for this, with a custom line length. To run our code formatting, you just need to run
```
make style
```
Then for the windows error in the CI, I'm looking into it. It's probably just a file that isn't properly closed |
https://github.com/huggingface/datasets/pull/2560 | fix Dataset.map when num_procs > num rows | CI is all green now ! Thanks :)
There are still many code formatting changes in your PR - probably due to the first commit you did.
To avoid conflicts with future PRs it would be nice to only have the changes related to the `num_proc` warning, and not have all those code formatting changes.
Could you try to remove those code formatting changes?
If it's easier for you, you can make a new branch from `master` if needed | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
``` | 79 | text: fix Dataset.map when num_procs > num rows
closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
CI is all green now ! Thanks :)
There are still many code formatting changes in your PR - probably due to the first commit you did.
To avoid conflicts with future PRs it would be nice to only have the changes related to the `num_proc` warning, and not have all those code formatting changes.
Could you try to remove those code formatting changes?
If it's easier for you, you can make a new branch from `master` if needed |
https://github.com/huggingface/datasets/pull/2560 | fix Dataset.map when num_procs > num rows | Thanks, @lhoestq! Apologies for the half-baked commits yesterday! I wasn't able to step back in to resolve those CI issues until this morning.
Also, I'm surprised that `make style` isn't resolving the formatting changes. I'm a bit stumped on that, so I'm going to re-apply on a new branch and open a PR as you suggested. | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
``` | 56 | text: fix Dataset.map when num_procs > num rows
closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
Thanks, @lhoestq! Apologies for the half-baked commits yesterday! I wasn't able to step back in to resolve those CI issues until this morning.
Also, I'm surprised that `make style` isn't resolving the formatting changes. I'm a bit stumped on that, so I'm going to re-apply on a new branch and open a PR as you suggested.
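The fix discussed in this thread boils down to capping `num_proc` at the number of rows and warning instead of crashing. A rough sketch of that guard (the exact warning wording in `datasets` may differ):

```python
import logging

logger = logging.getLogger(__name__)


def select_num_proc(num_rows: int, num_proc: int) -> int:
    """Clamp the number of worker processes to the number of rows (sketch)."""
    if num_proc > num_rows:
        # warn rather than fail: spawning more workers than rows would leave
        # some workers with empty shards
        logger.warning(
            f"num_proc must be <= {num_rows}. Reducing num_proc to {num_rows} "
            f"for dataset of size {num_rows}."
        )
        num_proc = num_rows
    return num_proc
```

With this guard, the one-row `dataset.map(lambda x: x, num_proc=10)` example from the testing notes would run with a single process and emit a warning.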
https://github.com/huggingface/datasets/pull/2541 | update discofuse link cc @ekQ | The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:
https://github.com/huggingface/datasets/blob/19408f9fab85c79b966085574cd2da3b90959179/datasets/discofuse/README.md#L1-L5
The missing tags are:
```
'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'pretty_name', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
```
Thanks again ! | Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee | 60 | text: update discofuse link cc @ekQ
Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee
The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:
https://github.com/huggingface/datasets/blob/19408f9fab85c79b966085574cd2da3b90959179/datasets/discofuse/README.md#L1-L5
The missing tags are:
```
'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'pretty_name', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
```
Thanks again ! |
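The missing-tags check above can be sketched as a small helper that compares a dataset card's YAML front matter against the required keys (a simplified illustration, not the validation code the CI actually runs):

```python
REQUIRED_TAGS = [
    "annotations_creators", "language_creators", "licenses", "multilinguality",
    "pretty_name", "size_categories", "source_datasets", "task_categories", "task_ids",
]


def missing_tags(readme_text: str) -> list:
    """Return the required YAML tags absent from a dataset card's front matter (sketch)."""
    # the front matter is the block between the first pair of "---" markers
    header = readme_text.split("---")[1] if readme_text.startswith("---") else ""
    present = {line.split(":")[0].strip() for line in header.splitlines() if ":" in line}
    return [tag for tag in REQUIRED_TAGS if tag not in present]
```

Running a check like this against `datasets/discofuse/README.md` would list exactly the tags quoted in the comment above.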
https://github.com/huggingface/datasets/pull/2539 | remove wi_locness dataset due to licensing issues | Hi ! I'm sorry to hear that.
Though we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk
Therefore I'm not sure what's the issue with licensing. What do you mean exactly ? | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | 48 | text: remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
Hi ! I'm sorry to hear that.
Though we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk
Therefore I'm not sure what's the issue with licensing. What do you mean exactly ? |
https://github.com/huggingface/datasets/pull/2539 | remove wi_locness dataset due to licensing issues | I think that the main issue is that the licenses of the data are not made clear in the huggingface hub - other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.
Is it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz) | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | 66 | text: remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
I think that the main issue is that the licenses of the data are not made clear in the huggingface hub - other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.
Is it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz) |
https://github.com/huggingface/datasets/pull/2539 | remove wi_locness dataset due to licensing issues | Thanks for the clarification @SimonHFL
You're completely right, we need to show the licenses.
I just added them here: https://huggingface.co/datasets/wi_locness#licensing-information | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | 20 | text: remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
Thanks for the clarification @SimonHFL
You're completely right, we need to show the licenses.
I just added them here: https://huggingface.co/datasets/wi_locness#licensing-information |
https://github.com/huggingface/datasets/pull/2539 | remove wi_locness dataset due to licensing issues | Hi guys, I'm one of the authors of this dataset.
To clarify, we're happy for you to keep the data in the repo on 2 conditions:
1. You don't host the data yourself.
2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license.
I think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :) | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | 92 | text: remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
Hi guys, I'm one of the authors of this dataset.
To clarify, we're happy for you to keep the data in the repo on 2 conditions:
1. You don't host the data yourself.
2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license.
I think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :) |
https://github.com/huggingface/datasets/pull/2539 | remove wi_locness dataset due to licensing issues | Thanks for your message @chrisjbryant :)
I'm closing this PR then.
And thanks for reporting @aseifert | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | 16 | text: remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
Thanks for your message @chrisjbryant :)
I'm closing this PR then.
And thanks for reporting @aseifert |
https://github.com/huggingface/datasets/pull/2537 | Add Parquet loader + from_parquet and to_parquet | `pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.
Also I still need to add dummy data to test the parquet builder. | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | 28 | text: Add Parquet loader + from_parquet and to_parquet
Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.
Also I still need to add dummy data to test the parquet builder. |
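The batched conversion mentioned above can be illustrated with a minimal chunking helper: each chunk would be turned into an Arrow table and written out before the next one is materialized, so the whole file never sits in memory at once (a sketch of the idea, not the actual writer code):

```python
def iter_batches(rows, batch_size: int = 1000):
    """Yield fixed-size chunks of rows instead of materializing everything (sketch)."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch  # in the real builder, each batch becomes an Arrow record batch
            batch = []
    if batch:
        yield batch  # final, possibly smaller, chunk
```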
https://github.com/huggingface/datasets/pull/2537 | Add Parquet loader + from_parquet and to_parquet | I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.
Everything is ready for review now :)
I reused pretty much the same tests we had for CSV | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | 32 | text: Add Parquet loader + from_parquet and to_parquet
Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.
Everything is ready for review now :)
I reused pretty much the same tests we had for CSV |
https://github.com/huggingface/datasets/pull/2537 | Add Parquet loader + from_parquet and to_parquet | Done !
Now we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to pyarrow>=3.0.0 | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | 21 | text: Add Parquet loader + from_parquet and to_parquet
Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
Done !
Now we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to pyarrow>=3.0.0 |
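Keeping `pyarrow>=1.0.0` overall while gating parquet features on 3.0.0 can be sketched as a lazy version check that only fires when a parquet method is actually called (the version strings and error wording here are illustrative):

```python
def check_pyarrow_for_parquet(pyarrow_version: str, minimum: str = "3.0.0") -> None:
    """Raise only when parquet features are requested with an older pyarrow (sketch)."""
    def as_tuple(version: str) -> tuple:
        # compare "major.minor.patch" numerically rather than as strings
        return tuple(int(part) for part in version.split(".")[:3])

    if as_tuple(pyarrow_version) < as_tuple(minimum):
        raise ImportError(
            f"Using parquet features requires pyarrow>={minimum}, "
            f"but pyarrow=={pyarrow_version} is installed."
        )
```

A gate like this would be called at the top of `from_parquet` / `to_parquet`, so users who never touch parquet keep working with older pyarrow.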
https://github.com/huggingface/datasets/pull/2534 | Sync with transformers disabling NOTSET | Nice thanks ! I think there are other places with
```python
not_verbose = bool(logger.getEffectiveLevel() > WARNING)
```
Could you replace them as well ? | Close #2528. | 24 | text: Sync with transformers disabling NOTSET
Close #2528.
Nice thanks ! I think there are other places with
```python
not_verbose = bool(logger.getEffectiveLevel() > WARNING)
```
Could you replace them as well ? |
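For background on why that check still resolves sensibly once `NOTSET` is dropped as a stored level: with the standard `logging` module, `getEffectiveLevel()` handles `NOTSET` by walking up to the parent logger, so a fresh library logger inherits the root logger's default `WARNING`. A quick stdlib illustration:

```python
import logging

# A fresh logger defaults to NOTSET, which means "defer to the parent logger"
lib_logger = logging.getLogger("datasets_sketch")
assert lib_logger.level == logging.NOTSET

# getEffectiveLevel() walks up the hierarchy; the root logger defaults to WARNING
effective = lib_logger.getEffectiveLevel()
not_verbose = bool(effective > logging.WARNING)  # the check quoted above
print(effective, not_verbose)  # 30 False
```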
https://github.com/huggingface/datasets/pull/2530 | Fixed label parsing in the ProductReviews dataset | @lhoestq, can you please review this PR?
What exactly is the problem in the test case? Should it matter? | Fixed issue with parsing dataset labels. | 19 | text: Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
@lhoestq, can you please review this PR?
What exactly is the problem in the test case? Should it matter? |
https://github.com/huggingface/datasets/pull/2530 | Fixed label parsing in the ProductReviews dataset | Hi ! Thanks for fixing this :)
The CI fails for two reasons:
- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:
```yaml
pretty_name: Turkish Product Reviews
```
- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file | Fixed issue with parsing dataset labels. | 86 | text: Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
Hi ! Thanks for fixing this :)
The CI fails for two reasons:
- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:
```yaml
pretty_name: Turkish Product Reviews
```
- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed the dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file
https://github.com/huggingface/datasets/pull/2530 | Fixed label parsing in the ProductReviews dataset | > Hi ! Thanks for fixing this :)
>
> The CI fails for two reasons:
>
> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:
>
>
> ```yaml
> pretty_name: Turkish Product Reviews
> ```
>
> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file
Many thanks for the quick feedback.
I made the relevant fixes but still got the error :( | Fixed issue with parsing dataset labels. | 115 | text: Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
> Hi ! Thanks for fixing this :)
>
> The CI fails for two reasons:
>
> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:
>
>
> ```yaml
> pretty_name: Turkish Product Reviews
> ```
>
> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file
Many thanks for the quick feedback.
I made the relevant fixes but still got the error :( |
https://github.com/huggingface/datasets/pull/2530 | Fixed label parsing in the ProductReviews dataset | > Thanks !
> The CI was failing because of the dataset card that was missing some sections. I fixed that.
>
> It's all good now
Super. Thanks for the support. | Fixed issue with parsing dataset labels. | 32 | text: Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
> Thanks !
> The CI was failing because of the dataset card that was missing some sections. I fixed that.
>
> It's all good now
Super. Thanks for the support. |
https://github.com/huggingface/datasets/pull/2529 | Add summarization template | > Nice thanks !
> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.
sure, on it! thanks for the explanations about the `self._to` method :) | This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
| 37 | text: Add summarization template
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
> Nice thanks !
> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.
sure, on it! thanks for the explanations about the `self._to` method :) |
https://github.com/huggingface/datasets/pull/2529 | Add summarization template | @lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :)) | This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
| 40 | text: Add summarization template
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :)) |
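The column mapping that `prepare_for_task` performs for this template can be sketched as a plain function over one example (illustrative only - the real template operates on the dataset schema, not on per-example dicts):

```python
def prepare_for_summarization(
    example: dict, text_column: str = "document", summary_column: str = "summary"
) -> dict:
    """Rename the task-relevant columns to the template's canonical names (sketch)."""
    # columns outside the template (e.g. "id") are dropped, mirroring the
    # xsum example above where only "text" and "summary" remain
    return {"text": example[text_column], "summary": example[summary_column]}
```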
https://github.com/huggingface/datasets/pull/2524 | Raise FileNotFoundError in WindowsFileLock | Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ? | Closes #2443 | 26 | text: Raise FileNotFoundError in WindowsFileLock
Closes #2443
Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ? |
https://github.com/huggingface/datasets/pull/2524 | Raise FileNotFoundError in WindowsFileLock | This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user. | Closes #2443 | 74 | text: Raise FileNotFoundError in WindowsFileLock
Closes #2443
This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user. |
https://github.com/huggingface/datasets/pull/2519 | Improve performance of pandas arrow extractor | Looks like this change
```
pa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)
```
doesn't return a Series with the correct type.
This is related to https://issues.apache.org/jira/browse/ARROW-9664
Since the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype | While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster. | 39 | text: Improve performance of pandas arrow extractor
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
Looks like this change
```
pa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)
```
doesn't return a Series with the correct type.
This is related to https://issues.apache.org/jira/browse/ARROW-9664
Since the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype |
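A hedged sketch of the workaround (hypothetical helper name): since `types_mapper` was ignored by `ChunkedArray.to_pandas` in affected pyarrow versions (ARROW-9664), the conversion can be routed through a one-column table, where the mapper is honored:

```python
def extract_column_as_series(pa_table, column, types_mapper=None):
    # Sketch: select a one-column pyarrow Table and call Table.to_pandas,
    # which does apply types_mapper, instead of calling to_pandas on the
    # ChunkedArray directly (which ignored it in older pyarrow versions).
    one_column_table = pa_table.select([column])
    return one_column_table.to_pandas(types_mapper=types_mapper)[column]
```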
https://github.com/huggingface/datasets/pull/2509 | Fix fingerprint when moving cache dir | Yea issues on windows are about long paths, not long filenames.
We can make sure the lock filenames are not too long, but not for the paths | The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
Close #2496
I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests to run on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255.
We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could end up being very long. | 27 | text: Fix fingerprint when moving cache dir
The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
Close #2496
I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests to run on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255.
We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could end up being very long.
Yea issues on windows are about long paths, not long filenames.
We can make sure the lock filenames are not too long, but not for the paths |
https://github.com/huggingface/datasets/pull/2505 | Make numpy arrow extractor faster | Looks like we have a nice speed up in some benchmarks. For example:
- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec
- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | 31 | text: Make numpy arrow extractor faster
I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster
Looks like we have a nice speed up in some benchmarks. For example:
- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec
- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec |
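For context, the faster path boils down to calling `to_numpy` on the Arrow column directly (a rough, assumed sketch - the real code is the library's `NumpyArrowExtractor`):

```python
def extract_column_as_numpy(pa_table, column):
    # Sketch: convert the Arrow ChunkedArray straight to a NumPy array,
    # avoiding the much slower to_pylist() -> np.array() round trip
    # through Python objects.
    return pa_table[column].to_numpy()
```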
https://github.com/huggingface/datasets/pull/2505 | Make numpy arrow extractor faster | @lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | 32 | text: Make numpy arrow extractor faster
I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster
@lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. |
https://github.com/huggingface/datasets/pull/2505 | Make numpy arrow extractor faster | Thanks for investigating @vblagoje
@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | 22 | text: Make numpy arrow extractor faster
I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster
Thanks for investigating @vblagoje
@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it |
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | Hi @mariosasko, thanks for taking on this issue.
Just a few logistical suggestions, as you are one of our most active contributors ❤️ :
- When you start working on an issue, you can self-assign it by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that...
- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.
I hope you find these hints useful. π€ | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 118 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
Hi @mariosasko, thanks for taking on this issue.
Just a few logistical suggestions, as you are one of our most active contributors ❤️ :
- When you start working on an issue, you can self-assign it by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that...
- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.
I hope you find these hints useful.
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | @albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help. | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 44 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help. |
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | Ready for the review!
One additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase("__DummyDataset__")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.
| Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 44 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
Ready for the review!
One additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase("__DummyDataset__")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.
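The fixed conversion can be sketched with inflection-style regexes (illustrative, not the exact library code):

```python
import re

def camelcase_to_snakecase(name: str) -> str:
    # Insert an underscore only at a lowercase/digit-to-uppercase (or
    # acronym-to-word) boundary, so leading and trailing underscores are
    # preserved: "__DummyDataset__" -> "__dummy_dataset__" with no extra
    # underscore at the beginning.
    name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1_\2", name)
    name = re.sub(r"([a-z\d])([A-Z])", r"\1_\2", name)
    return name.lower()
```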
|
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | Thank you for adding this feature, @mariosasko - this is really awesome!
Tried with:
```
python -c "from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)"
Using the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets
/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12
20:22:53 2021)
since it couldn't be found locally at openwebtext-10k/openwebtext-10k.py
or remotely (FileNotFoundError).
/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b
```
The logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is, what does `FileNotFoundError` refer to?
1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found
2. I'm not sure why it says "since it couldn't be found locally" - as it is locally found at the cache folder and again what does " locally at openwebtext-10k/openwebtext-10k.py" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?
If the cached version always supersedes any other versions perhaps this is what it should say?
```
found cached version at xxx, not looking for a local at yyy, not downloading remote at zzz
``` | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 197 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
Thank you for adding this feature, @mariosasko - this is really awesome!
Tried with:
```
python -c "from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)"
Using the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets
/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12
20:22:53 2021)
since it couldn't be found locally at openwebtext-10k/openwebtext-10k.py
or remotely (FileNotFoundError).
/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b
```
The logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is, what does `FileNotFoundError` refer to?
1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found
2. I'm not sure why it says "since it couldn't be found locally" - as it is locally found at the cache folder and again what does " locally at openwebtext-10k/openwebtext-10k.py" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?
If the cached version always supersedes any other versions perhaps this is what it should say?
```
found cached version at xxx, not looking for a local at yyy, not downloading remote at zzz
``` |
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | Hi ! Thanks for the comments
Regarding your last message:
You must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.
When you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py
Here since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.
As a fallback it managed to find the dataset script in your cache and it used this one. | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 91 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
Hi ! Thanks for the comments
Regarding your last message:
You must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.
When you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py
Here since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.
As a fallback it managed to find the dataset script in your cache and it used this one. |
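The lookup order described here can be sketched as follows (simplified and assumed - the real resolution in `load_dataset` also consults the cache, as in the fallback above):

```python
def resolve_dataset_script(name: str):
    # Sketch: a namespaced name ("user/dataset") is fetched from the Hub;
    # a bare name is treated as a canonical dataset or a local script at
    # ./<name>/<name>.py, which is why "openwebtext-10k" alone raised
    # FileNotFoundError while "stas/openwebtext-10k" works.
    if "/" in name:
        namespace, dataset = name.split("/", 1)
        return "hub", f"https://huggingface.co/datasets/{namespace}/{dataset}"
    return "canonical_or_local", f"./{name}/{name}.py"
```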
https://github.com/huggingface/datasets/pull/2500 | Add load_dataset_builder | Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:
```
python -c "from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)"
/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b
```
Now there is no logger message. Got it!
OK, I'm not sure the magical recovery it did in the first place is most beneficial in the long run. I'd rather it had failed and said: "incorrect input: there is no such dataset as 'openwebtext-10k' at <this path> or <this url>" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does that make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.
> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.
Except it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.
Plus, for the local lookup it's not clear what path it is looking relative to when it gets `FileNotFoundError` - perhaps it'd help to use an absolute path in the message?
---------------
Finally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500
i.e. I had no idea it was warning me of something; I was just trying to make sense of the message. That's why I started the discussion - otherwise I'd have completely missed the point that I was making an error.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| 271 | text: Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:
```
python -c "from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)"
/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b
```
Now there is no logger message. Got it!
OK, I'm not sure the magical recovery it did in the first place is most beneficial in the long run. I'd rather it had failed and said: "incorrect input: there is no such dataset as 'openwebtext-10k' at <this path> or <this url>" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does that make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.
> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.
Except it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.
Plus, for the local lookup it's not clear what path it is looking relative to when it gets `FileNotFoundError` - perhaps it'd help to use an absolute path in the message?
---------------
Finally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500
i.e. I had no idea it was warning me of something; I was just trying to make sense of the message. That's why I started the discussion - otherwise I'd have completely missed the point that I was making an error.
https://github.com/huggingface/datasets/pull/2497 | Use default cast for sliced list arrays if pyarrow >= 4 | I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:
https://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78
Can you confirm @lhoestq ? | From pyarrow version 4, it is supported to cast sliced lists.
This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4.
In relation with PR #2461 and #2490.
cc: @lhoestq, @abhi1thakur, @SBrandeis | 19 | text: Use default cast for sliced list arrays if pyarrow >= 4
From pyarrow version 4, it is supported to cast sliced lists.
This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4.
In relation with PR #2461 and #2490.
cc: @lhoestq, @abhi1thakur, @SBrandeis
I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:
https://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78
Can you confirm @lhoestq ? |
https://github.com/huggingface/datasets/pull/2486 | Add Rico Dataset | Hi ! Thanks for adding this dataset :)
Regarding your questions:
1. We can have them as different configurations of the `rico` dataset
2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at a time during training if they want to, for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.
3. Feel free to keep the hierarchies as strings if they don't follow a fixed format
4. You can just return the path
| Hi there!
I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.
1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset?
You can see the datasets available for Rico here: http://interactionmining.org/rico
2) As of right now, I have a semi working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have `datasets` lib not put everything into memory while it is processing the dataset?
2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image?
3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently?
4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string?
I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: ! | 105 | text: Add Rico Dataset
Hi there!
I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.
1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset?
You can see the datasets available for Rico here: http://interactionmining.org/rico
2) As of right now, I have a semi working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have `datasets` lib not put everything into memory while it is processing the dataset?
2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image?
3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently?
4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string?
I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !
Hi ! Thanks for adding this dataset :)
Regarding your questions:
1. We can have them as different configurations of the `rico` dataset
2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at a time during training if they want to, for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.
3. Feel free to keep the hierarchies as strings if they don't follow a fixed format
4. You can just return the path
|
https://github.com/huggingface/datasets/pull/2483 | Use gc.collect only when needed to avoid slow downs | I still think the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.
My guess is that tqdm made a change in version 4.50.0 that means the iterable is not properly released. | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However calling gc.collect too often causes significant slow downs (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset | 53 | text: Use gc.collect only when needed to avoid slow downs
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However calling gc.collect too often causes significant slow downs (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset
I still think the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.
My guess is that tqdm made a change in version 4.50.0 that means the iterable is not properly released.
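The placement change can be sketched like this (illustrative wrapper; the actual call site is the dataset post-processing code):

```python
import gc

def run_post_processing(step):
    # Sketch: collect garbage once, right after the step that leaves
    # unreachable reference cycles holding open file handles (the Windows
    # issue), rather than after every operation - calling gc.collect()
    # too often had doubled the CI run time.
    result = step()
    gc.collect()
    return result
```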
https://github.com/huggingface/datasets/pull/2477 | Fix docs custom stable version | I see that @lhoestq overlooked this PR with his commit 07e2b05.
I'm adding a script so that this issue does not happen again.
| Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead. | 24 | text: Fix docs custom stable version
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
I see that @lhoestq overlooked this PR with his commit 07e2b05.
I'm adding a script so that this issue does not happen again.
|
https://github.com/huggingface/datasets/pull/2477 | Fix docs custom stable version | For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release. | Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead. | 25 | text: Fix docs custom stable version
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release. |
https://github.com/huggingface/datasets/pull/2477 | Fix docs custom stable version | I think we just need to clarify the release process in setup.py instead of adding a script that does the replacement | Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead. | 21 | text: Fix docs custom stable version
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
I think we just need to clarify the release process in setup.py instead of adding a script that does the replacement |
https://github.com/huggingface/datasets/pull/2477 | Fix docs custom stable version | @lhoestq I really think we should implement a script that performs the release (instead of doing it manually as it is done now), as it is already the case in `transformers`. I will do it in a next PR.
For the moment, this PR includes one of the steps of the release script. | Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead. | 53 | text: Fix docs custom stable version
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
@lhoestq I really think we should implement a script that performs the release (instead of doing it manually as it is done now), as it is already the case in `transformers`. I will do it in a next PR.
For the moment, this PR includes one of the steps of the release script. |
https://github.com/huggingface/datasets/pull/2476 | Add TimeDial | Hi @lhoestq,
I've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!
~PS. I don't know why it's not triggering the build~ | Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags | 28 | text: Add TimeDial
Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags
Hi @lhoestq,
I've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!
~PS. I don't know why it's not triggering the build~ |
https://github.com/huggingface/datasets/pull/2473 | Add Disfl-QA | Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then. | Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags | 22 | text: Add Disfl-QA
Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags
Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then. |
https://github.com/huggingface/datasets/pull/2473 | Add Disfl-QA | I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)
| Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags | 30 | text: Add Disfl-QA
Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags
I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)
|
https://github.com/huggingface/datasets/pull/2469 | Bump tqdm version | i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows
it's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets` | 46 | text: Bump tqdm version
i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows
it's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets` |
|
https://github.com/huggingface/datasets/pull/2465 | adding masahaner dataset | Thanks a lot for the corrections and comments.
I have resolved point 2. The make style still throws some errors, please see below
black --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics
/bin/sh: 1: black: not found
Makefile:13: recipe for target 'style' failed
make: *** [style] Error 127
Can you help to resolve this? | Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq , can you please review | 55 | text: adding masahaner dataset
Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq , can you please review
Thanks a lot for the corrections and comments.
I have resolved point 2. The make style still throws some errors, please see below
black --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics
/bin/sh: 1: black: not found
Makefile:13: recipe for target 'style' failed
make: *** [style] Error 127
Can you help to resolve this? |
https://github.com/huggingface/datasets/pull/2457 | Add align_labels_with_mapping function | @lhoestq thanks for the feedback - it's now integrated :)
i also added a comment about sorting the input label IDs | This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq | 21 | text: Add align_labels_with_mapping function
This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq
@lhoestq thanks for the feedback - it's now integrated :)
i also added a comment about sorting the input label IDs |
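The remapping this helper performs can be sketched in plain Python. This is a toy re-implementation for illustration only (the function name `align_labels` and the case-insensitive name matching are assumptions, not the library's actual code):

```python
def align_labels(labels, dataset_id2label, model_label2id):
    """Remap integer labels from the dataset's id space to the model's id space.

    Toy illustration of the idea behind align_labels_with_mapping:
    dataset id -> label name -> model id (names compared case-insensitively).
    """
    # normalize the model mapping to lowercase names, e.g. "ENTAILMENT" -> "entailment"
    normalized = {name.lower(): idx for name, idx in model_label2id.items()}
    return [normalized[dataset_id2label[i].lower()] for i in labels]

# mnli-style dataset mapping vs deberta-base-mnli-style model mapping
dataset_id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
model_label2id = {"CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1}

print(align_labels([0, 1, 2], dataset_id2label, model_label2id))  # [2, 1, 0]
```

With the MNLI mappings above, dataset label `0` (`entailment`) becomes model label `2`, which is exactly the swap that caused the low accuracy.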
https://github.com/huggingface/datasets/pull/2457 | Add align_labels_with_mapping function | > Thanks ! Looks all good now :)
>
> We will also need to have the `DatasetDict.align_labels_with_mapping` method. Let me quickly add it
thanks a lot! i always forget about `DatasetDict` - will be happy when it's just one "dataset" object :) | This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq | 43 | text: Add align_labels_with_mapping function
This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq
> Thanks ! Looks all good now :)
>
> We will also need to have the `DatasetDict.align_labels_with_mapping` method. Let me quickly add it
thanks a lot! i always forget about `DatasetDict` - will be happy when it's just one "dataset" object :) |
https://github.com/huggingface/datasets/pull/2455 | Update version in xor_tydi_qa.py | Hi ! Thanks for updating the version
> Should I revert to the old dummy/1.0.0 or delete it and keep only dummy/1.1.0?
Feel free to delete the old dummy data files
| Fix #2449
@lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`? | 31 | text: Update version in xor_tydi_qa.py
Fix #2449
@lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`?
Hi ! Thanks for updating the version
> Should I revert to the old dummy/1.0.0 or delete it and keep only dummy/1.1.0?
Feel free to delete the old dummy data files
|
https://github.com/huggingface/datasets/pull/2453 | Keep original features order | The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. It was failing for many datasets like squad | When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366. | 35 | text: Keep original features order
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366.
The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. It was failing for many datasets like squad |
https://github.com/huggingface/datasets/pull/2453 | Keep original features order | @lhoestq I do not understand when you say:
> It was failing for many datasets like squad
All the tests were green after my last commit. | When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366. | 26 | text: Keep original features order
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366.
@lhoestq I do not understand when you say:
> It was failing for many datasets like squad
All the tests were green after my last commit. |
https://github.com/huggingface/datasets/pull/2453 | Keep original features order | > All the tests were green after my last commit.
Yes but loading the actual squad dataset was failing :/
| When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366. | 20 | text: Keep original features order
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366.
> All the tests were green after my last commit.
Yes but loading the actual squad dataset was failing :/
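The fix, reordering the arrays according to the schema instead of assuming alphabetically sorted column names, can be illustrated with a toy example using plain dicts in place of Arrow arrays (the real fix operates on `pyarrow` schemas in the arrow writer):

```python
def reorder_columns(columns, schema_order):
    """Return the columns keyed in the schema's order rather than
    relying on them being alphabetically sorted."""
    return {name: columns[name] for name in schema_order}

# a squad-like record whose natural column order is not alphabetical;
# iterating over the dict as-is would give the sorted order instead
columns = {
    "answers": [["a1"]],
    "context": ["c1"],
    "question": ["q1"],
    "title": ["t1"],
}
schema_order = ["title", "context", "question", "answers"]

print(list(reorder_columns(columns, schema_order)))  # ['title', 'context', 'question', 'answers']
```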
|
https://github.com/huggingface/datasets/pull/2449 | Update `xor_tydi_qa` url to v1.1 | Just noticed while
```load_dataset('local_path/datasets/xor_tydi_qa')``` works,
```load_dataset('xor_tydi_qa')```
outputs an error:
`
FileNotFoundError: Couldn't find file at https://nlp.cs.washington.edu/xorqa/XORQA_site/data/xor_dev_retrieve_eng_span.jsonl
`
(the old url)
I tried clearing the cache `.cache/huggingface/modules` and `.cache/huggingface/datasets`, didn't work.
Anyone know how to fix this? Thanks. | The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366 | 37 | text: Update `xor_tydi_qa` url to v1.1
The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
Just noticed while
```load_dataset('local_path/datasets/xor_tydi_qa')``` works,
```load_dataset('xor_tydi_qa')```
outputs an error:
`
FileNotFoundError: Couldn't find file at https://nlp.cs.washington.edu/xorqa/XORQA_site/data/xor_dev_retrieve_eng_span.jsonl
`
(the old url)
I tried clearing the cache `.cache/huggingface/modules` and `.cache/huggingface/datasets`, didn't work.
Anyone know how to fix this? Thanks. |
https://github.com/huggingface/datasets/pull/2449 | Update `xor_tydi_qa` url to v1.1 | It seems like the error is not on your end. By default, the lib tries to download the version of the dataset script that matches the version of the lib, and that version of the script is, in your case, broken because the old URL no longer works. Once this PR gets merged, you can wait for the new release or set `script_version` to `"master"` in `load_dataset` to get the fixed version of the script. | The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366 | 75 | text: Update `xor_tydi_qa` url to v1.1
The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
It seems like the error is not on your end. By default, the lib tries to download the version of the dataset script that matches the version of the lib, and that version of the script is, in your case, broken because the old URL no longer works. Once this PR gets merged, you can wait for the new release or set `script_version` to `"master"` in `load_dataset` to get the fixed version of the script. |
https://github.com/huggingface/datasets/pull/2449 | Update `xor_tydi_qa` url to v1.1 | @mariosasko Thanks! It works now.
Pasting the docstring here for reference.
```
script_version (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load:
- For canonical datasets in the `huggingface/datasets` library like "squad", the default version of the module is the local version of the lib.
You can specify a different version from your local version of the lib (e.g. "master" or "1.2.0") but it might cause compatibility issues.
- For community provided datasets like "lhoestq/squad" that have their own git repository on the Datasets Hub, the default version "main" corresponds to the "main" branch.
You can specify a different version than the default "main" by using a commit sha or a git tag of the dataset repository.
```
Branch name didn't work, but commit sha works. | The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366 | 128 | text: Update `xor_tydi_qa` url to v1.1
The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
@mariosasko Thanks! It works now.
Pasting the docstring here for reference.
```
script_version (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load:
- For canonical datasets in the `huggingface/datasets` library like "squad", the default version of the module is the local version of the lib.
You can specify a different version from your local version of the lib (e.g. "master" or "1.2.0") but it might cause compatibility issues.
- For community provided datasets like "lhoestq/squad" that have their own git repository on the Datasets Hub, the default version "main" corresponds to the "main" branch.
You can specify a different version than the default "main" by using a commit sha or a git tag of the dataset repository.
```
Branch name didn't work, but commit sha works. |
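The resolution rule described in the docstring can be sketched as a toy classifier of the `script_version` argument. This is illustrative only; the regexes and categories below are my assumptions, not the library's actual resolution logic:

```python
import re

def classify_script_version(script_version, default="master"):
    """Toy classification of a script_version argument (illustrative only)."""
    if script_version is None:
        return ("default", default)
    if re.fullmatch(r"\d+\.\d+\.\d+", script_version):
        # looks like a lib release, e.g. "1.2.0"
        return ("release-tag", script_version)
    if re.fullmatch(r"[0-9a-f]{40}", script_version):
        # a full git commit sha
        return ("commit-sha", script_version)
    # anything else is treated as a branch name, e.g. "master"
    return ("branch", script_version)

print(classify_script_version("1.2.0"))   # ('release-tag', '1.2.0')
print(classify_script_version("master"))  # ('branch', 'master')
```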
https://github.com/huggingface/datasets/pull/2449 | Update `xor_tydi_qa` url to v1.1 | Regarding the issue you mentioned about the `--ignore_verifications` flag, I think we should actually change the current behavior of the `--save_infos` flag to make it ignore the verifications as well, so that you don't need to specific `--ignore_verifications` in this case. | The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366 | 41 | text: Update `xor_tydi_qa` url to v1.1
The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
Regarding the issue you mentioned about the `--ignore_verifications` flag, I think we should actually change the current behavior of the `--save_infos` flag to make it ignore the verifications as well, so that you don't need to specific `--ignore_verifications` in this case. |
https://github.com/huggingface/datasets/pull/2445 | Fix broken URLs for bn_hate_speech and covid_tweets_japanese | Thanks ! To fix the CI you just have to rename the dummy data file in the dummy_data.zip files | Closes #2388 | 19 | text: Fix broken URLs for bn_hate_speech and covid_tweets_japanese
Closes #2388
Thanks ! To fix the CI you just have to rename the dummy data file in the dummy_data.zip files |
https://github.com/huggingface/datasets/pull/2442 | add english language tags for ~100 datasets | Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags | As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing so adding into the READMEs.
Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English... | 27 | text: add english language tags for ~100 datasets
As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing so adding into the READMEs.
Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English...
Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags |
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?
For instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the split requested is not in the dict? pointing to the list of splits and the fact that this is a datasetdict containing several datasets? | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 78 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?
For instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the split requested is not in the dict? pointing to the list of splits and the fact that this is a datasetdict containing several datasets?
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | As a digression from the topic of this PR, IMHO I think that the difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. I think a user expects a "Dataset" (whatever it contains multiple or a single split) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
I don't know your opinion about this, but it might be worth discussing...
For example, I really like the line of the solution of using the function `load_from_disk`, which hides the previous mentioned complexity and handles under the hood whether Dataset/DatasetDict instances should be created... | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 114 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
As a digression from the topic of this PR, IMHO I think that the difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. I think a user expects a "Dataset" (whether it contains multiple splits or a single one) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
I don't know your opinion about this, but it might be worth discussing...
For example, I really like the line of the solution of using the function `load_from_disk`, which hides the previous mentioned complexity and handles under the hood whether Dataset/DatasetDict instances should be created... |
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | I totally agree, I just haven't found a solution that doesn't imply major breaking changes x) | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 16 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
I totally agree, I just haven't found a solution that doesn't imply major breaking changes x) |
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | Yes I would also like to find a better solution. Do we have any solution actually? (even implying breaking changes)
Here is a proposal for discussion and refined (and potential abandon if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers you access the examples progressively in split after the other is finished (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 125 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
Yes I would also like to find a better solution. Do we have any solution actually? (even implying breaking changes)
Here is a proposal for discussion and refined (and potential abandon if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers you access the examples progressively, one split after the other (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict |
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | The end goal would be to merge both `Dataset` and `DatasetDict` object in a single object that would be (pretty much totally) backward compatible with both. | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 26 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
The end goal would be to merge both `Dataset` and `DatasetDict` object in a single object that would be (pretty much totally) backward compatible with both. |
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | I like the direction :) I think it can make sense to concatenate them.
There are a few things that I we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object ? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them like `train_test_split` that is currently only available for Dataset can be tweaked to work for a split dataset | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 184 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
I like the direction :) I think it can make sense to concatenate them.
There are a few things that I we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits, as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover, regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them, like `train_test_split`, which is currently only available for Dataset, can be tweaked to work for a split dataset
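The point-1 ambiguity can be sketched in plain Python (a toy illustration only; `MergedDataset` and its behavior are assumptions, not the real `datasets` API): disallowing split/column name conflicts up front keeps string indexing unambiguous.

```python
class MergedDataset:
    """Toy sketch of a merged Dataset/DatasetDict: string indexing returns
    either a split or a column, with name conflicts rejected at construction."""

    def __init__(self, splits, column_names):
        conflicts = set(splits) & set(column_names)
        if conflicts:
            raise ValueError(f"split/column name conflict: {sorted(conflicts)}")
        self._splits = splits                  # e.g. {"train": [rows], "test": [rows]}
        self._column_names = set(column_names)

    def __getitem__(self, key):
        if key in self._splits:                # indexing by split name
            return self._splits[key]
        if key in self._column_names:          # indexing by column name, across splits
            return [row[key] for rows in self._splits.values() for row in rows]
        raise KeyError(key)


ds = MergedDataset(
    splits={"train": [{"input_ids": [1, 2]}], "test": [{"input_ids": [3]}]},
    column_names=["input_ids"],
)
print(ds["train"])      # the "train" split
print(ds["input_ids"])  # the "input_ids" column gathered across splits
```

With the conflict check in place, `ds["train"]` and `ds["input_ids"]` can never collide, which addresses the "surprising" case raised above.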
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | Instead of suggesting the use of `Dataset.load_from_disk` and `DatasetDict.load_from_disk`, the error message now suggests using `datasets.load_from_disk` directly | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 18 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
Instead of suggesting the use of `Dataset.load_from_disk` and `DatasetDict.load_from_disk`, the error message now suggests using `datasets.load_from_disk` directly
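A hedged sketch of what a single `load_from_disk` entry point can do: inspect the saved directory and dispatch to the right loader. The marker file names below are illustrative assumptions for the sketch, not the library's exact on-disk contract.

```python
import json
import os
import tempfile


def load_from_disk_sketch(path):
    # Dispatch between a Dataset and a DatasetDict directory by looking at
    # which marker file is present (file names here are assumptions).
    if os.path.isfile(os.path.join(path, "dataset_dict.json")):
        return "DatasetDict"
    if os.path.isfile(os.path.join(path, "dataset_info.json")):
        return "Dataset"
    raise FileNotFoundError(
        f"Directory {path} is neither a `Dataset` nor a `DatasetDict` directory."
    )


with tempfile.TemporaryDirectory() as tmp:
    # Simulate a saved DatasetDict directory with its marker file.
    with open(os.path.join(tmp, "dataset_dict.json"), "w", encoding="utf-8") as f:
        json.dump({"splits": ["train", "test"]}, f)
    kind = load_from_disk_sketch(tmp)
    print(kind)  # DatasetDict
```

A single entry point like this is why the improved error message can just point users at `datasets.load_from_disk`: the dispatch happens internally.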
https://github.com/huggingface/datasets/pull/2437 | Better error message when using the wrong load_from_disk | Merging the error message improvement; feel free to continue the discussion here or in a GitHub issue | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | 17 | text: Better error message when using the wrong load_from_disk
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
Merging the error message improvement; feel free to continue the discussion here or in a GitHub issue
https://github.com/huggingface/datasets/pull/2435 | Insert Extractive QA templates for SQuAD-like datasets | Hi @lhoestq @SBrandeis, I've now added the missing YAML tags, so this PR should be good to go :) | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur | 19 | text: Insert Extractive QA templates for SQuAD-like datasets
This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur
Hi @lhoestq @SBrandeis, I've now added the missing YAML tags, so this PR should be good to go :)
https://github.com/huggingface/datasets/pull/2435 | Insert Extractive QA templates for SQuAD-like datasets | urgh, the Windows tests are failing because of encoding issues 😢
```
dataset_name = 'squad_kor_v1'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E README:
E 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>
E The following issues have been found in the dataset cards:
E YAML tags:
E 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>
``` | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur | 130 | text: Insert Extractive QA templates for SQuAD-like datasets
This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur
urgh, the Windows tests are failing because of encoding issues 😢
```
dataset_name = 'squad_kor_v1'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E README:
E 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>
E The following issues have been found in the dataset cards:
E YAML tags:
E 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>
``` |
https://github.com/huggingface/datasets/pull/2435 | Insert Extractive QA templates for SQuAD-like datasets | Seems like the encoding issues on Windows are also being tackled in #2418 - will see if this solves the problem in the current PR | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur | 25 | text: Insert Extractive QA templates for SQuAD-like datasets
This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
PR #2429 should be merged before this one.
cc @abhi1thakur
Seems like the encoding issues on Windows are also being tackled in #2418 - will see if this solves the problem in the current PR
https://github.com/huggingface/datasets/pull/2430 | Add version-specific BibTeX | For info:
- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.
- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.
- All the information automatically generated by Zenodo can be corrected/customized if necessary.
- If we decide to correct/update the metadata, take into account that the following fields exist (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...
According to Zenodo: https://help.zenodo.org/#versioning
> **Which DOI should I use in citations?**
>
> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.
>
> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version. | As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY | 177 | text: Add version-specific BibTeX
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
For info:
- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.
- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.
- All the information automatically generated by Zenodo can be corrected/customized if necessary.
- If we decide to correct/update the metadata, take into account that the following fields exist (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...
According to Zenodo: https://help.zenodo.org/#versioning
> **Which DOI should I use in citations?**
>
> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.
>
> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version. |
https://github.com/huggingface/datasets/pull/2430 | Add version-specific BibTeX | Thanks for the details ! As zenodo says we should probably just show the versioned DOI. And we can remove the old citation. | As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY | 23 | text: Add version-specific BibTeX
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
Thanks for the details ! As zenodo says we should probably just show the versioned DOI. And we can remove the old citation. |
https://github.com/huggingface/datasets/pull/2430 | Add version-specific BibTeX | I have removed the old citation.
What about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https://github.com/huggingface/datasets/graphs/contributors
I do not know if this default automatic list of authors is what we want to show in the citation... | As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY | 63 | text: Add version-specific BibTeX
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
I have removed the old citation.
What about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https://github.com/huggingface/datasets/graphs/contributors
I do not know if this default automatic list of authors is what we want to show in the citation... |
https://github.com/huggingface/datasets/pull/2429 | Rename QuestionAnswering template to QuestionAnsweringExtractive | > I like having "extractive" in the name to make things explicit. However this creates an inconsistency with transformers.
>
> See
> https://huggingface.co/transformers/task_summary.html#extractive-question-answering
>
> But this is minor IMO and I'm ok with this renaming
Yes, I chose this convention because it allows us to match the `QuestionAnsweringXxx` naming, and I think it's better to have `task_name-subtask_name` should auto-complete ever become part of the Hub :)
Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.
> I like having "extractive" in the name to make things explicit. However this creates an inconsistency with transformers.
>
> See
> https://huggingface.co/transformers/task_summary.html#extractive-question-answering
>
> But this is minor IMO and I'm ok with this renaming
Yes, I chose this convention because it allows us to match the `QuestionAnsweringXxx` naming, and I think it's better to have `task_name-subtask_name` should auto-complete ever become part of the Hub :)
https://github.com/huggingface/datasets/pull/2425 | Fix Docstring Mistake: dataset vs. metric | The CI failure is unrelated to this PR, and it has been fixed on master; merging :) | PR to fix #2412 | 17 | text: Fix Docstring Mistake: dataset vs. metric
PR to fix #2412
The CI failure is unrelated to this PR, and it has been fixed on master; merging :)
https://github.com/huggingface/datasets/pull/2423 | add `desc` in `map` for `DatasetDict` object | @lhoestq, can we release this feature if you guys are planning a patch release for Datasets? It'll slow down [#11927](https://github.com/huggingface/transformers/pull/11927) otherwise :/ | `desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well | 23 | text: add `desc` in `map` for `DatasetDict` object
`desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well
@lhoestq, can we release this feature if you guys are planning a patch release for Datasets? It'll slow down [#11927](https://github.com/huggingface/transformers/pull/11927) otherwise :/
https://github.com/huggingface/datasets/pull/2423 | add `desc` in `map` for `DatasetDict` object | Sure definitely, having a discrepancy between Dataset.map and DatasetDict.map is an issue that we should fix and include in a patch release. Will do it in the coming days | `desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well | 29 | text: add `desc` in `map` for `DatasetDict` object
`desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well
Sure definitely, having a discrepancy between Dataset.map and DatasetDict.map is an issue that we should fix and include in a patch release. Will do it in the coming days |
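The pattern behind the `desc` PR can be sketched with a toy stand-in (none of this is the real `datasets.DatasetDict` implementation; the class below is an assumption for illustration): the dict-level `map` accepts `desc` and forwards it to each split's per-split work.

```python
class ToyDatasetDict(dict):
    """Illustrative stand-in for a split dictionary: map() accepts a
    `desc` keyword and applies it per split, mirroring the PR's idea."""

    def map(self, function, desc=None):
        mapped = {}
        for split, rows in self.items():
            if desc is not None:
                print(f"{desc} [{split}]")  # per-split progress description
            mapped[split] = [function(row) for row in rows]
        return ToyDatasetDict(mapped)


dd = ToyDatasetDict(train=[1, 2], test=[3])
out = dd.map(lambda x: x * 10, desc="Tokenizing")
print(out)  # {'train': [10, 20], 'test': [30]}
```

Forwarding the keyword at the dict level removes the discrepancy between the two `map` signatures that the discussion above mentions.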
https://github.com/huggingface/datasets/pull/2419 | adds license information for DailyDialog. | Thanks! Can you also add it as metadata in the YAML block at the top of the file?
Should be in the form:
```
licenses:
- cc-by-sa-4.0
``` | 28 | text: adds license information for DailyDialog.
Thanks! Can you also add it as metadata in the YAML block at the top of the file?
Should be in the form:
```
licenses:
- cc-by-sa-4.0
``` |
|
https://github.com/huggingface/datasets/pull/2419 | adds license information for DailyDialog. | seems like we need to add all the other tags?
```
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'languages', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
``` | 49 | text: adds license information for DailyDialog.
seems like we need to add all the other tags?
```
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'languages', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
``` |
|
https://github.com/huggingface/datasets/pull/2419 | adds license information for DailyDialog. | Looks like CircleCI has an incident. Let's wait for it to be working again and make sure the CI is green | 21 | text: adds license information for DailyDialog.
Looks like CircleCI has an incident. Let's wait for it to be working again and make sure the CI is green |
|
https://github.com/huggingface/datasets/pull/2418 | add utf-8 while reading README | Can you please add encoding to this line as well to fix the issue (and maybe replace `path.open(...)` with `open(path, ...)`)?
https://github.com/huggingface/datasets/blob/7bee4be44706a59b084b9b69c4cd00f73ee72f76/src/datasets/utils/metadata.py#L58 | It was causing tests to fail on Windows (see #2416). On Windows, the default encoding is CP1252, which is unable to decode the byte 0x9d | 22 | text: add utf-8 while reading README
It was causing tests to fail on Windows (see #2416). On Windows, the default encoding is CP1252, which is unable to decode the byte 0x9d
Can you please add encoding to this line as well to fix the issue (and maybe replace `path.open(...)` with `open(path, ...)`)?
https://github.com/huggingface/datasets/blob/7bee4be44706a59b084b9b69c4cd00f73ee72f76/src/datasets/utils/metadata.py#L58 |
https://github.com/huggingface/datasets/pull/2418 | add utf-8 while reading README | Sure, in fact I was also thinking of adding this in order to maintain consistency! | It was causing tests to fail on Windows (see #2416). On Windows, the default encoding is CP1252, which is unable to decode the byte 0x9d | 16 | text: add utf-8 while reading README
It was causing tests to fail on Windows (see #2416). On Windows, the default encoding is CP1252, which is unable to decode the byte 0x9d
Sure, in fact I was also thinking of adding this in order to maintain consistency!
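The failure mode this PR fixes can be reproduced with the standard library alone (a self-contained sketch, independent of `datasets`): the UTF-8 byte 0x9d decodes fine with an explicit `encoding="utf-8"` but has no mapping in cp1252, the Windows default.

```python
import os
import tempfile

# "”" (U+201D) encodes to the UTF-8 bytes e2 80 9d; byte 0x9d is undefined
# in cp1252, which is why a platform-default open() on Windows raises
# "'charmap' codec can't decode byte 0x9d".
text = "curly quotes: \u201cREADME\u201d"

fd, path = tempfile.mkstemp(suffix=".md")
os.close(fd)
try:
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    # Passing encoding="utf-8" explicitly makes the read deterministic on
    # every OS instead of depending on the platform default encoding.
    with open(path, encoding="utf-8") as f:
        assert f.read() == text
    # Decoding the same bytes as cp1252 fails, reproducing the CI error:
    raw = open(path, "rb").read()
    try:
        raw.decode("cp1252")
    except UnicodeDecodeError as err:
        print(err)
finally:
    os.remove(path)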
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | I'm not sure why I got error like below when I auto-generate dummy data "mrc"
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
``` | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 35 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
I'm not sure why I got error like below when I auto-generate dummy data "mrc"
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
``` |
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | > I'm not sure why I got error like below when I auto-generate dummy data "mrc"
>
> ```
> datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
> Found duplicate Key: 0
> Keys should be unique and deterministic in nature
> ```
Please check out the suggestion below. I think it might be a cause. | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 55 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
> I'm not sure why I got error like below when I auto-generate dummy data "mrc"
>
> ```
> datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
> Found duplicate Key: 0
> Keys should be unique and deterministic in nature
> ```
Please check out the suggestion below. I think it might be a cause. |