html_url (string, 47–49 chars) | title (string, 4–111 chars) | comments (string, 71–20.4k chars) | body (string, 0–12.9k chars, nullable) | comment_length_in_words (int64, 16–1.61k) | text (string, 100–20.5k chars)
---|---|---|---|---|---|
https://github.com/huggingface/datasets/pull/552 | Add multiprocessing | Hi,
when I use multiprocessing in `.map`:
```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True, num_proc=16)
dataset.set_format(type='torch', columns=['input_ids'])
```
I get the following error:
```
Traceback (most recent call last):
File "src/run.py", line 373, in <module>
main()
File "src/run.py", line 295, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "src/run.py", line 153, in get_dataset
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in map
transformed_shards = [r.get() for r in results]
File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
put(task)
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_dataset.<locals>.<lambda>'
```
I think you should use [pathos](https://github.com/uqfoundation/pathos) to pickle the lambda function and some others!
I changed line 30 of src/datasets/arrow_dataset.py as follows:
```
# line 30 was: from multiprocessing import Pool, RLock
import pathos
from pathos.multiprocessing import Pool
from multiprocessing import RLock
```
and it works! | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset into `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function(x):
return {"lowered": x.lower()}
processed = dataset.map(
function,
input_columns=["context"],
num_proc=4,
cache_file_name="playground/tmp.arrow",
load_from_cache_file=False
)
```
Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`
The suffix format can be specified by the user.
If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.
I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
| 173 | text: Add multiprocessing
Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset into `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function(x):
return {"lowered": x.lower()}
processed = dataset.map(
function,
input_columns=["context"],
num_proc=4,
cache_file_name="playground/tmp.arrow",
load_from_cache_file=False
)
```
Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`
The suffix format can be specified by the user.
If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.
I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
Hi,
when I use multiprocessing in `.map`:
```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True, num_proc=16)
dataset.set_format(type='torch', columns=['input_ids'])
```
I get the following error:
```
Traceback (most recent call last):
File "src/run.py", line 373, in <module>
main()
File "src/run.py", line 295, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "src/run.py", line 153, in get_dataset
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in map
transformed_shards = [r.get() for r in results]
File "/root/miniconda3/envs/py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1287, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
put(task)
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/root/miniconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_dataset.<locals>.<lambda>'
```
I think you should use [pathos](https://github.com/uqfoundation/pathos) to pickle the lambda function and some others!
I changed line 30 of src/datasets/arrow_dataset.py as follows:
```
# line 30 was: from multiprocessing import Pool, RLock
import pathos
from pathos.multiprocessing import Pool
from multiprocessing import RLock
```
and it works! |
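A minimal sketch of the usual workaround for the pickling error above: pass a function defined at module level (rather than a lambda or a closure defined inside another function) to `.map` when `num_proc` is set. The model name and data file here are placeholders, not taken from the report.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model

# A module-level function can be pickled by the standard multiprocessing machinery,
# unlike the lambda defined inside get_dataset in the traceback above.
def tokenize(batch):
    return tokenizer(batch["text"], add_special_tokens=True, truncation=True, max_length=512)

dataset = load_dataset("text", data_files="train.txt", split="train")  # placeholder file
dataset = dataset.map(tokenize, batched=True, num_proc=16)
dataset.set_format(type="torch", columns=["input_ids"])
```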
https://github.com/huggingface/datasets/pull/552 | Add multiprocessing | Are you using a tokenizer?
Did you try to set `TOKENIZERS_PARALLELISM=false`?
Feel free to discuss it in #620, where we're discussing this issue | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset into `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function(x):
return {"lowered": x.lower()}
processed = dataset.map(
function,
input_columns=["context"],
num_proc=4,
cache_file_name="playground/tmp.arrow",
load_from_cache_file=False
)
```
Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`
The suffix format can be specified by the user.
If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.
I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
| 25 | text: Add multiprocessing
Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset into `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function(x):
return {"lowered": x.lower()}
processed = dataset.map(
function,
input_columns=["context"],
num_proc=4,
cache_file_name="playground/tmp.arrow",
load_from_cache_file=False
)
```
Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`
The suffix format can be specified by the user.
If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.
I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
Are you using a tokenizer?
Did you try to set `TOKENIZERS_PARALLELISM=false`?
Feel free to discuss it in #620, where we're discussing this issue |
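For reference, a small sketch of the suggestion above, assuming a fast (Rust-backed) tokenizer from `transformers`: the environment variable is set before the tokenizer is first used so its internal parallelism doesn't clash with the `num_proc` worker processes. Model and file names are placeholders.

```python
import os

# Disable the Rust tokenizer's internal parallelism before it is first used,
# so it does not conflict with the processes spawned by `num_proc`.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
dataset = load_dataset("text", data_files="train.txt", split="train")  # placeholder file

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, num_proc=4)
```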
https://github.com/huggingface/datasets/pull/550 | [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | Thanks a lot for that!
The line you are mentioning is a bug indeed, do you mind fixing it at the same time? | Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```
**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue). | 23 | text: [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539)
Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```
**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue).
Thanks a lot for that!
The line you are mentioning is a bug indeed, do you mind fixing it at the same time? |
https://github.com/huggingface/datasets/pull/550 | [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | No worries!
I pushed the fix right away, but then I realized that the master branch already had it, so I ended up merging the master branch into lince locally and then overwriting the previous commit in origin/lince. Hopefully, this is not too messy :)
| Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```
**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue). | 45 | text: [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539)
Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```
**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue).
No worries!
I pushed the fix right away, but then I realized that the master branch already had it, so I ended up merging the master branch into lince locally and then overwriting the previous commit in origin/lince. Hopefully, this is not too messy :)
|
https://github.com/huggingface/datasets/pull/549 | Fix bleurt logging import | That’s a good point that we started to discuss internally as well. We should pin the dataset and metrics code by default indeed.
Let’s update this in the coming release. | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...) | 30 | text: Fix bleurt logging import
Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...)
That’s a good point that we started to discuss internally as well. We should pin the dataset and metrics code by default indeed.
Let’s update this in the coming release. |
https://github.com/huggingface/datasets/pull/549 | Fix bleurt logging import | Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release). | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...) | 26 | text: Fix bleurt logging import
Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...)
Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release). |
https://github.com/huggingface/datasets/pull/548 | [Breaking] Switch text loading to multi-threaded PyArrow loading | Awesome !
Also I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag, no? Apparently we can get this tag with `os.path.getmtime(path)` | Test if we can get better performance for large-scale text datasets by using multi-threaded text file loading based on the Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines now do not include final line-breaks anymore. | 69 | text: [Breaking] Switch text loading to multi-threaded PyArrow loading
Test if we can get better performance for large-scale text datasets by using multi-threaded text file loading based on the Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines now do not include final line-breaks anymore.
Awesome !
Also I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag, no? Apparently we can get this tag with `os.path.getmtime(path)` |
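A rough sketch of the cheaper hashing idea floated above: combine the absolute path with file metadata (size and `os.path.getmtime`) instead of reading the whole file. The function name is made up for illustration; the trade-off is that an edit preserving both size and modification time would go unnoticed.

```python
import hashlib
import os

def quick_data_file_hash(path):
    """Hash a data file from its path and metadata instead of its full content."""
    stat = os.stat(path)
    meta = f"{os.path.abspath(path)}::{stat.st_size}::{stat.st_mtime}"
    return hashlib.md5(meta.encode("utf-8")).hexdigest()

print(quick_data_file_hash(__file__))
```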
https://github.com/huggingface/datasets/pull/540 | [BUGFIX] Fix Race Dataset Checksum bug | I'm not sure this would fix #537 .
However your point about the missing `middle` data is right and we probably want to include these data as well.
Do you think it would be worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`)? | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | 58 | text: [BUGFIX] Fix Race Dataset Checksum bug
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions.
I'm not sure this would fix #537 .
However your point about the missing `middle` data is right and we probably want to include these data as well.
Do you think it would be worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`)? |
https://github.com/huggingface/datasets/pull/540 | [BUGFIX] Fix Race Dataset Checksum bug | This has fixed #537 at least on my machine hahaha.
Nice point! I think it would totally be worth it :) What implementation approach would you suggest?
Would it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense? | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | 51 | text: [BUGFIX] Fix Race Dataset Checksum bug
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions.
This has fixed #537 at least on my machine hahaha.
Nice point! I think it would totally be worth it :) What implementation approach would you suggest?
Would it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense? |
https://github.com/huggingface/datasets/pull/540 | [BUGFIX] Fix Race Dataset Checksum bug | I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.
You just need to add
```python
BUILDER_CONFIGS = [
nlp.BuilderConfig(
name="high school",
description="insert description here",
),
nlp.BuilderConfig(
name="middle",
description="insert description here",
),
nlp.BuilderConfig(
name="all",
description="insert description here",
),
]
```
as a class attribute for the `Race` class.
Then in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.
You can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it is done in general, it's a dataset that has five configurations, and each config has train/val/test splits. | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | 104 | text: [BUGFIX] Fix Race Dataset Checksum bug
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions.
I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.
You just need to add
```python
BUILDER_CONFIGS = [
nlp.BuilderConfig(
name="high school",
description="insert description here",
),
nlp.BuilderConfig(
name="middle",
description="insert description here",
),
nlp.BuilderConfig(
name="all",
description="insert description here",
),
]
```
as a class attribute for the `Race` class.
Then in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.
You can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it is done in general, it's a dataset that has five configurations, and each config has train/val/test splits. |
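To make the last step concrete, here is a hypothetical helper showing how example generation could pick its input files from `self.config.name`; the file paths are invented for illustration and this is not the actual RACE script.

```python
def select_files(config_name, files):
    """Pick which extracted files feed _generate_examples for a given config."""
    if config_name == "high school":
        return [f for f in files if "/high/" in f]
    if config_name == "middle":
        return [f for f in files if "/middle/" in f]
    return list(files)  # "all" keeps every file

# Hypothetical extracted paths, just to show the filtering behaviour.
files = ["RACE/train/high/1001.txt", "RACE/train/middle/2001.txt"]
print(select_files("middle", files))  # -> ['RACE/train/middle/2001.txt']
```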
https://github.com/huggingface/datasets/pull/540 | [BUGFIX] Fix Race Dataset Checksum bug | Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)
You were correct as well: since I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fixed that as well :)
I managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | 65 | text: [BUGFIX] Fix Race Dataset Checksum bug
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions.
Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)
You were correct as well: since I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fixed that as well :)
I managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution |
https://github.com/huggingface/datasets/pull/536 | Fingerprint | I changed the way I implemented fingerprint updates to use decorator functions.
I also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dataset that is memory mapped from a file.
Let me know what you think @thomwolf | This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc.
However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table.
To fix that, I added the concept of dataset fingerprint, that is updated after each transform (in place or not), and stored inside the table metadata.
When a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing.
I added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature.
Note about this PR:
This is a draft PR because #513 needs to be merged first.
The diff that is shown is for branches fingerprint -> indices (and not master, for now) | 61 | text: Fingerprint
This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc.
However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table.
To fix that, I added the concept of dataset fingerprint, that is updated after each transform (in place or not), and stored inside the table metadata.
When a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing.
I added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature.
Note about this PR:
This is a draft PR because #513 needs to be merged first.
The diff that is shown is for branches fingerprint -> indices (and not master, for now)
I changed the way I implemented fingerprint updates to use decorator functions.
I also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dataset that is memory mapped from a file.
Let me know what you think @thomwolf |
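A toy illustration (not the library's actual code) of the decorator idea described above: each transform derives the output's fingerprint from the input's fingerprint plus the transform's name and arguments, so both in-place and out-of-place changes end up reflected in the cache key.

```python
import functools
import hashlib

def update_fingerprint(previous, transform_name, args, kwargs):
    h = hashlib.md5(previous.encode("utf-8"))
    h.update(transform_name.encode("utf-8"))
    h.update(repr((args, sorted(kwargs.items()))).encode("utf-8"))
    return h.hexdigest()

def fingerprint_transform(method):
    """Decorator: recompute the fingerprint of the returned dataset after a transform."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        out = method(self, *args, **kwargs)
        out._fingerprint = update_fingerprint(self._fingerprint, method.__name__, args, kwargs)
        return out
    return wrapper

class ToyDataset:
    def __init__(self, data, fingerprint="initial"):
        self.data = data
        self._fingerprint = fingerprint

    @fingerprint_transform
    def select(self, indices):
        return ToyDataset([self.data[i] for i in indices], self._fingerprint)

d = ToyDataset(list("abcdef"))
print(d.select([0, 2])._fingerprint)  # differs from "initial" and from any other transform
```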
https://github.com/huggingface/datasets/pull/530 | use ragged tensor by default | Yes I agree. Maybe something that lets you specify a different format depending on the column? Especially to better control dtype and shape (and ragged for tf)
Oh and I forgot: this one should also fix the second issue found in #477 for the next release | I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a ragged tensor and sometimes not.
Therefore I reverted this behavior to always return a ragged tensor as we used to do. | 45 | text: use ragged tensor by default
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a ragged tensor and sometimes not.
Therefore I reverted this behavior to always return a ragged tensor as we used to do.
Yes I agree. Maybe something that lets you specify a different format depending on the column? Especially to better control dtype and shape (and ragged for tf)
Oh and I forgot: this one should also fix the second issue found in #477 for the next release |
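A short TensorFlow sketch of why the ragged default is simpler to handle: variable-length token sequences cannot be stacked into a dense tensor without padding, whereas a `tf.RaggedTensor` represents them directly and can still be densified on demand.

```python
import tensorflow as tf

# Two tokenized examples of different lengths.
input_ids = [[101, 2023, 102], [101, 2023, 2003, 1037, 2936, 6251, 102]]

ragged = tf.ragged.constant(input_ids)
print(ragged.shape)         # (2, None): the second dimension is ragged
print(ragged.to_tensor(0))  # zero-padded dense tensor if one is really needed
```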
https://github.com/huggingface/datasets/pull/529 | Add MLSUM | Could you try running the test with the changes in #527 and let me know if it fixes the issue? If so I'll merge it and we'll be good to go :) | Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset: the script throws an error because a specific language config is required.
I think that setting a default language would be a bad workaround for this so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size.
Thanks for your help,
Rachel
| 34 | text: Add MLSUM
Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the script throws an error as a specific config language is necessary.
I think that setting a default language would be a bad workaround for this so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size.
Thanks for your help,
Rachel
Could you try running the test with the changes in #527 and let me know if it fixes the issue? If so I'll merge it and we'll be good to go :) |
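As a usage note on the missing default config: MLSUM ships one configuration per language, so callers have to name one explicitly. A small illustrative example, assuming the current `datasets` API:

```python
from datasets import load_dataset

# A language configuration (e.g. "de", "es", "fr", "ru", "tu") must be given explicitly;
# loading "mlsum" without one raises an error asking for a config name.
mlsum_fr = load_dataset("mlsum", "fr", split="train")
print(mlsum_fr[0]["title"])
```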
https://github.com/huggingface/datasets/pull/528 | fix missing variable names in docs | The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...` | fix #524 | 23 | text: fix missing variable names in docs
fix #524
The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...` |
https://github.com/huggingface/datasets/pull/521 | Fix dictionnary (dictionary) typo | Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :) | This error happens many times; I'm thinking maybe it's spelled like this on purpose? | 17 | text: Fix dictionnary (dictionary) typo
This error happens many times; I'm thinking maybe it's spelled like this on purpose?
Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :) |
https://github.com/huggingface/datasets/pull/520 | Transform references for sacrebleu | I think I agree @lhoestq so I pushed a change.
Thanks for your work on the library! | Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error.
This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu. | 17 | text: Transform references for sacrebleu
Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error.
This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu.
I think I agree @lhoestq so I pushed a change.
Thanks for your work on the library! |
https://github.com/huggingface/datasets/pull/513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | Ok I fixed `concatenate_datasets` and added tests
Feel free to merge if it's good for you @thomwolf | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself. | 17 | text: [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
Ok I fixed `concatenate_datasets` and added tests
Feel free to merge if it's good for you @thomwolf |
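A rough standalone sketch of the indices-mapping idea (not the library's code): the Arrow table is never rewritten, only an integer mapping is permuted or sliced, and reads resolve through `pyarrow.Table.take`.

```python
import numpy as np
import pyarrow as pa

table = pa.table({"idx": list(range(10)), "text": [f"example {i}" for i in range(10)]})

# Re-ordering/selection only touches the mapping, never the underlying table.
indices = np.arange(table.num_rows)
rng = np.random.default_rng(0)
shuffled = rng.permutation(indices)  # "shuffle" = permute the mapping
selected = shuffled[:5]              # "select"/"shard" = slice the mapping

# Reads go through the mapping: Table.take gathers arbitrary rows (a bit slower
# than the contiguous Table.slice, but no data was copied up front).
print(table.take(pa.array(selected)).to_pydict())
```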
https://github.com/huggingface/datasets/pull/513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | Warning from pytorch that we should maybe consider at some point @lhoestq:
```
/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,
and PyTorch does not support non-writeable tensors. This means you can write to the underlying
(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to
protect its data or make it writeable before converting it to a tensor. This type of warning will be
suppressed for the rest of this program.
(Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
532
return torch.tensor(x, **format_kwargs)
``` | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself. | 87 | text: [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
Warning from pytorch that we should maybe consider at some point @lhoestq:
```
/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,
and PyTorch does not support non-writeable tensors. This means you can write to the underlying
(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to
protect its data or make it writeable before converting it to a tensor. This type of warning will be
suppressed for the rest of this program.
(Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
532
return torch.tensor(x, **format_kwargs)
``` |
https://github.com/huggingface/datasets/pull/513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | > Warning from pytorch that we should maybe consider at some point @lhoestq:
>
> ```
> /__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,
> and PyTorch does not support non-writeable tensors. This means you can write to the underlying
> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to
> protect its data or make it writeable before converting it to a tensor. This type of warning will be
> suppressed for the rest of this program.
> (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
> 532
> return torch.tensor(x, **format_kwargs)
> ```
Not sure why we have that, it's probably linked to zero copy from arrow to numpy | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself. | 115 | text: [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
> Warning from pytorch that we should maybe consider at some point @lhoestq:
>
> ```
> /__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,
> and PyTorch does not support non-writeable tensors. This means you can write to the underlying
> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to
> protect its data or make it writeable before converting it to a tensor. This type of warning will be
> suppressed for the rest of this program.
> (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
> 532
> return torch.tensor(x, **format_kwargs)
> ```
Not sure why we have that, it's probably linked to zero copy from arrow to numpy |
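A tiny reproduction of the warning discussed above and the usual way to silence it, copying the read-only (zero-copy) buffer before handing it to PyTorch; purely illustrative.

```python
import numpy as np
import torch

arr = np.arange(5)
arr.setflags(write=False)  # simulate a read-only buffer obtained zero-copy from Arrow

# torch.tensor(arr) on a non-writeable array emits the UserWarning (it copies anyway);
# making an explicit writeable copy first keeps things quiet and unambiguous.
t = torch.tensor(np.array(arr, copy=True))
print(t)
```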
https://github.com/huggingface/datasets/pull/505 | tmp_file referenced before assignment | Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.
(I'm doing a new PR because I know there's some other place where it needs to be fixed) | Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file". | 35 | text: tmp_file referenced before assignment
Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file".
Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.
(I'm doing a new PR because I know there's some other place where it needs to be fixed) |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | I don't see any significant change in the dataset script (except the version value update), can you check that again please ? | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 22 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
I don't see any significant change in the dataset script (except the version value update), can you check that again please ? |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ? | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 20 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ? |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap! | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 16 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap! |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.
| We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 59 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.
|
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.
The checksum is computed by hashing the complete file.
You can update the checksum by doing
```
nlp-cli test ./datasets/compguesswhat --save_infos --all_configs
``` | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 45 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.
The checksum is computed by hashing the complete file.
You can update the checksum by doing
```
nlp-cli test ./datasets/compguesswhat --save_infos --all_configs
``` |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Hi :)
I think what's left to do is
1- rebase from master, since we changed the name of the library
2- update the metadata file of the dataset using the command
```
datasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications
```
This command should update the checksum of the dropbox file | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 50 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Hi :)
I think what's left to do is
1- rebase from master, since we changed the name of the library
2- update the metadata file of the dataset using the command
```
datasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications
```
This command should update the checksum of the dropbox file |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | @lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas? | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 28 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas? |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Your version of `black` might be outdated, or you ran `black` directly instead of `make style`, since it reformatted 100+ files.
Could you try to update black, then `make style` ? | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 31 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Your version of `black` might be outdated, or you ran `black` directly instead of `make style`, since it reformatted 100+ files.
Could you try to update black, then `make style` ? |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | It still doesn't look right in terms of line-length.
Are you running `black` or `make style` ? | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 17 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
It still doesn't look right in terms of line-length.
Are you running `black` or `make style` ? |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | I'm running `make style`. This is the output of the command:
```
black --line-length 119 --target-version py36 tests src benchmarks datasets metrics
All done! ✨ 🍰 ✨
250 files left unchanged.
isort tests src benchmarks datasets metrics
``` | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 38 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
I'm running `make style`. This is the output of the command:
```
black --line-length 119 --target-version py36 tests src benchmarks datasets metrics
All done! ✨ 🍰 ✨
250 files left unchanged.
isort tests src benchmarks datasets metrics
``` |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 19 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too |
https://github.com/huggingface/datasets/pull/503 | CompGuessWhat?! 0.2.0 | I think that's because black doesn't revert the changes you first did with the old version.
Could you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes) | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | 47 | text: CompGuessWhat?! 0.2.0
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
I think that's because black doesn't revert the changes you first did with the old version.
Could you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes) |
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | I took a look at the dummy data creation for this dataset.
The reason it didn't work on your side might be that `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not actual zip files in the dummy data folder.
I managed to make it work with this `dummy_data.zip` file:
https://drive.google.com/file/d/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd/view?usp=sharing | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 51 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
I took a look at the dummy data creation for this dataset.
The reason it didn't work on your side might be that `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not actual zip files in the dummy data folder.
I managed to make it work with this `dummy_data.zip` file:
https://drive.google.com/file/d/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd/view?usp=sharing |
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | @lhoestq Hmm, it wasn't that. Must have been something else I missed.
Have committed your working version though now. | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 18 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
@lhoestq Hmm, it wasn't that. Must have been something else I missed.
Have committed your working version though now. |
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ? | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 18 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ? |
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | > Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?
Have added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:
---
OverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
---
| Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 56 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
> Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?
Have added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:
---
OverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
---
|
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | It looks like it's an issue with Pyarrow.
By changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.
Basically it seems that if an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.
I can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.
Maybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 94 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
It looks like it's an issue with Pyarrow.
By changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.
Basically it seems that if an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.
I can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.
Maybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. |
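A rough sketch of that batch-size idea, assuming the examples are plain Python dicts and using an illustrative ~2GB threshold and a hypothetical `writer.write_batch` call (this is not the actual `ArrowWriter` code):
```python
import pyarrow as pa

MAX_BATCH_NBYTES = 2 << 30  # ~2GB, assumed threshold

def write_in_safe_batches(writer, examples, batch_size=10_000):
    """Shrink the batch until the resulting Arrow array stays under the threshold (sketch only)."""
    start = 0
    while start < len(examples):
        size = batch_size
        batch = pa.array(examples[start:start + size])  # a StructArray when examples are dicts
        while batch.nbytes > MAX_BATCH_NBYTES and size > 1:
            size //= 2
            batch = pa.array(examples[start:start + size])
        writer.write_batch(batch)  # hypothetical writer API
        start += size
```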
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | @lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.
The CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one! | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 65 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
@lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.
The CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one! |
https://github.com/huggingface/datasets/pull/499 | Narrativeqa (with full text) | The dummy data test fails, apparently it's because no examples are yielded for the dummy data.
Also it looks like the PR now shows changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?
Feel free to ping me on the new PR so we can fix the dummy data together | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | 61 | text: Narrativeqa (with full text)
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious).
The dummy data test fails, apparently it's because no examples are yielded for the dummy data.
Also it looks like the PR now shows changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?
Feel free to ping me on the new PR so we can fix the dummy data together
https://github.com/huggingface/datasets/pull/494 | Fix numpy stacking | This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a column as a key. | When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue. | 23 | text: Fix numpy stacking
When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue.
This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a column as a key.
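A tiny sketch of the behaviour these fixes aim for; the column name and values are made up, and the availability of `Dataset.from_dict` and the exact stacked shape are assumptions, not verified against the final API:
```python
import torch
import nlp

# After setting the torch format, indexing by a column name should return a
# stacked tensor rather than a plain list of per-row tensors.
dset = nlp.Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})
dset.set_format(type="torch", columns=["input_ids"])
column = dset["input_ids"]
assert isinstance(column, torch.Tensor)
assert column.shape == (2, 3)
```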
https://github.com/huggingface/datasets/pull/487 | Fix elasticsearch result ids returning as strings | It looks like you need to rebase from master to fix the CI. Could you do that please ? | I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers. | 19 | text: Fix elasticsearch result ids returning as strings
I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers.
It looks like you need to rebase from master to fix the CI. Could you do that please ? |
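A minimal illustration of the issue; the response below just mimics the standard Elasticsearch search payload shape:
```python
# Elasticsearch returns document ids as strings, so they need an explicit cast
# before being used as integer row indices.
response = {"hits": {"hits": [{"_id": "42", "_score": 1.3}, {"_id": "7", "_score": 0.9}]}}

scores = [hit["_score"] for hit in response["hits"]["hits"]]
ids = [int(hit["_id"]) for hit in response["hits"]["hits"]]  # "42" -> 42
print(ids, scores)
```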
https://github.com/huggingface/datasets/pull/484 | update mirror for RT dataset | Thanks for adding this mirror link :)
Could you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?
```
nlp-cli test ./datasets/rotten_tomatoes --save_infos --ignore_verifications
``` | 36 | text: update mirror for RT dataset
Thanks for adding this mirror link :)
Could you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?
```
nlp-cli test ./datasets/rotten_tomatoes --save_infos --ignore_verifications
``` |
|
https://github.com/huggingface/datasets/pull/481 | Apply utf-8 encoding to all datasets | Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds 😢. Can someone please rerun the CI to check the error is not on my end? | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | 33 | text: Apply utf-8 encoding to all datasets
## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347
Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds 😢. Can someone please rerun the CI to check the error is not on my end? |
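A small usage sketch of the replacement function shown above, assuming it is importable as defined: walk `datasets/` and apply it to every Python file, which is what the PR description says was done.
```python
from pathlib import Path

# Apply the utf-8 replacement to every dataset script (sketch only).
for script in Path("datasets").rglob("*.py"):
    apply_encoding_on_file_open(str(script))
```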
https://github.com/huggingface/datasets/pull/481 | Apply utf-8 encoding to all datasets | I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error.
For some reason the docs are now failing to build, but does not seem related to my changes:
```
Warning, treated as error:
/home/circleci/nlp/src/nlp/dataset_dict.py:docstring of nlp.DatasetDict.filter:27:Inline interpreted text or phrase reference start-string without end-string.
make: *** [Makefile:20: html] Error 2
```
Any ideas what's going wrong? | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | 69 | text: Apply utf-8 encoding to all datasets
## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347
I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error.
For some reason the docs are now failing to build, but does not seem related to my changes:
```
Warning, treated as error:
/home/circleci/nlp/src/nlp/dataset_dict.py:docstring of nlp.DatasetDict.filter:27:Inline interpreted text or phrase reference start-string without end-string.
make: *** [Makefile:20: html] Error 2
```
Any ideas what's going wrong? |
https://github.com/huggingface/datasets/pull/481 | Apply utf-8 encoding to all datasets | The build_doc fail has been fixed on master.
It was due to the latest update of sphinx that has some issues, so I pinned the previous version for now. | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | 29 | text: Apply utf-8 encoding to all datasets
## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347
The build_doc fail has been fixed on master.
It was due to the latest update of sphinx that has some issues, so I pinned the previous version for now. |
https://github.com/huggingface/datasets/pull/481 | Apply utf-8 encoding to all datasets | I noticed that you also changed the Apache Beam `open` to also use utf-8. However it doesn't have an `encoding` parameter.
Therefore you should ignore lines like
```python
beam.io.filesystems.FileSystems.open(filepath)
```
I guess you could add a rule to your regex to only include the `open` calls that have a space right before them. | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | 53 | text: Apply utf-8 encoding to all datasets
## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347
I noticed that you also changed the Apache Beam `open` to also use utf-8. However it doesn't have an `encoding` parameter.
Therefore you should ignore lines like
```python
beam.io.filesystems.FileSystems.open(filepath)
```
I guess you could add a rule to your regex to only include the `open` calls that have a space right before them.
https://github.com/huggingface/datasets/pull/481 | Apply utf-8 encoding to all datasets | Good catch @lhoestq! Your suggestion to match on `open(...)` with a whitespace was a great idea - it allowed me to simplify the regexp considerably 😄.
I fixed the Apache Beam false positives and also caught a few problems in `json.load()`, e.g.
```python
relation_name_map = json.load(open(rel_info), encoding='utf-8')
```
I've tested that the new regexp doesn't reintroduce these false positives, so I think the PR is ready for another review. | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | 69 | text: Apply utf-8 encoding to all datasets
## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347
Good catch @lhoestq! Your suggestion to match on `open(...)` with a whitespace was a great idea - it allowed me to simplify the regexp considerably 😄.
I fixed the Apache Beam false positives and also caught a few problems in `json.load()`, e.g.
```python
relation_name_map = json.load(open(rel_info), encoding='utf-8')
```
I've tested that the new regexp doesn't reintroduce these false positives, so I think the PR is ready for another review. |
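A toy version of the whitespace rule being discussed, just to show why attribute calls stop matching once a leading whitespace is required (the real regexp in the PR is more involved):
```python
import re

# Only match a bare `open(...)` preceded by whitespace, so attribute accesses such
# as `beam.io.filesystems.FileSystems.open(...)` are left untouched.
pattern = re.compile(r"(?<=\s)open\([^)]*\)")

assert pattern.search("with open('data.txt') as f:") is not None
assert pattern.search("beam.io.filesystems.FileSystems.open(filepath)") is None
```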
https://github.com/huggingface/datasets/pull/480 | Column indexing hotfix | Looks good to me as well but we'll want to add a test indeed.
You can add one if you have time @TevenLeScao.
Otherwise, we'll do it when we are back with Quentin. | As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. In the future it'd probably be nice to have a test there. | 33 | text: Column indexing hotfix
As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. In the future it'd probably be nice to have a test there.
Looks good to me as well but we'll want to add a test indeed.
You can add one if you have time @TevenLeScao.
Otherwise, we'll do it when we are back with Quentin. |
https://github.com/huggingface/datasets/pull/479 | add METEOR metric | Really nice !
Thanks for adding this one.
I noticed that there are some '-' that are left in the description in the middle of some words. It might come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ? | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py) | 43 | text: add METEOR metric
Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
Really nice !
Thanks for adding this one.
I noticed that there are some '-' that are left in the description in the middle of some words. It might come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ? |
https://github.com/huggingface/datasets/pull/479 | add METEOR metric | @lhoestq
Linebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py) | 18 | text: add METEOR metric
Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
@lhoestq
Linebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. |
https://github.com/huggingface/datasets/pull/479 | add METEOR metric | Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways? | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py) | 41 | text: add METEOR metric
Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways? |
https://github.com/huggingface/datasets/pull/476 | CheckList | > Also, a little out of my depth there, but would there be a way to have the default pip install checklist command not require mysql and mariadb to be installed? Feels like that might be a source of confusion for users.
I removed the pattern dependency, mysql is not a requirement anymore. I'm not sure where mariadb is coming from. | Sorry for the large pull request.
- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook
- Added a checklist wrapper | 61 | text: CheckList
Sorry for the large pull request.
- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook
- Added a checklist wrapper
> Also, a little out of my depth there, but would there be a way to have the default pip install checklist command not require mysql and mariadb to be installed? Feels like that might be a source of confusion for users.
I removed the pattern dependency, mysql is not a requirement anymore. I'm not sure where mariadb is coming from. |
https://github.com/huggingface/datasets/pull/472 | add crd3 dataset | This PR was already approved by @lhoestq in #456 . This one just make style to remove some typos | opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems | 19 | text: add crd3 dataset
opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems
This PR was already approved by @lhoestq in #456 . This one just applies `make style` to remove some typos
https://github.com/huggingface/datasets/pull/470 | Adding IWSLT 2017 dataset. | Ok I tried to add the dummy dataset (I actually modified the dummy_data command to generate them for me because it was too painful to do that manually).
The dummy_data test seems to work:
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_iwslt2017
```
However the test on the full data fails, because the `**config_kwargs` don't include `pair, multilingual`.
I could add a default parameter for the Config (but that feels broken, how can one config be the "default" ?). If I do I still have errors, saying that something within the downloader is a directory so I'm not sure where that comes from.
I can share my auto_zip dummy data code if you want (I tried to keep it clean). [Edit: it's [here](https://github.com/Narsil/nlp/tree/auto_zip)].
The way it works is that it just keeps X lines from the beginning of the original files, and Y lines at the end. It's good enough for my usage, but I guess it could work for most data files out there (as long as they're real text and not binary format) | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | 171 | text: Adding IWSLT 2017 dataset.
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438
Ok I tried to add the dummy dataset (I actually modified the dummy_data command to generate them for me because it was too painful to do that manually).
The dummy_data test seems to work:
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_iwslt2017
```
However the test on the full data fails, because the `**config_kwargs` don't include `pair, multilingual`.
I could add a default parameter for the Config (but that feels broken, how can one config be the "default" ?). If I do I still have errors, saying that something within the downloader is a directory so I'm not sure where that comes from.
I can share my auto_zip dummy data code if you want (I tried to keep it clean). [Edit: it's [here](https://github.com/Narsil/nlp/tree/auto_zip)].
The way it works is that it just keeps X lines from the beginning of the original files, and Y lines at the end. It's good enough for my usage, but I guess it could work for most data files out there (as long as they're real text and not binary format)
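Not the actual auto_zip code, just a sketch of that head-and-tail idea (paths and line counts are illustrative):
```python
def truncate_for_dummy_data(src_path, dst_path, head=10, tail=10):
    """Keep the first `head` and last `tail` lines of a text file (sketch only)."""
    with open(src_path, encoding="utf-8") as f:
        lines = f.readlines()
    if len(lines) > head + tail:
        lines = lines[:head] + lines[-tail:]
    with open(dst_path, "w", encoding="utf-8") as f:
        f.writelines(lines)
```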
https://github.com/huggingface/datasets/pull/470 | Adding IWSLT 2017 dataset. | The slow test doesn't support datasets that require config parameters that don't have default values.
To improve that we can replace it by two tests:
- one test that loads the default config (it can simply be the first config of the config lists for example)
- one test that iterates over all configs and loads them all one by one
By using the configs inside the builder config lists, there is no need to instantiate new configs, so the missing parameter error doesn't happen.
Does that sound good to you ? | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | 92 | text: Adding IWSLT 2017 dataset.
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438
The slow test doesn't support datasets that require config parameters that don't have default values.
To improve that we can replace it by two tests:
- one test that loads the default config (it can simply be the first config of the config lists for example)
- one test that iterates over all configs and loads them all one by one
By using the configs inside the builder config lists, there is no need to instantiate new configs, so the missing parameter error doesn't happen.
Does that sound good to you ? |
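A rough sketch of those two tests (the `get_builder_cls` helper is made up for illustration; `BUILDER_CONFIGS` is the list of configs defined in the dataset script):
```python
from nlp import load_dataset

def test_load_dataset_default_config(dataset_name):
    # Load only the first config defined by the builder, if any.
    configs = get_builder_cls(dataset_name).BUILDER_CONFIGS  # hypothetical helper
    name = configs[0].name if configs else None
    load_dataset(dataset_name, name)

def test_load_dataset_all_configs(dataset_name):
    # Iterate over the configs already defined on the builder, one by one.
    for config in get_builder_cls(dataset_name).BUILDER_CONFIGS:
        load_dataset(dataset_name, config.name)
```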
https://github.com/huggingface/datasets/pull/470 | Adding IWSLT 2017 dataset. | Seems fair.
However I'm unsure what I should do ?
Should I wait for #527 to pass and rebase and the command will be the same ?
Should I update something ? | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | 32 | text: Adding IWSLT 2017 dataset.
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438
Seems fair.
However I'm unsure what I should do ?
Should I wait for #527 to pass and rebase and the command will be the same ?
Should I update something ? |
https://github.com/huggingface/datasets/pull/470 | Adding IWSLT 2017 dataset. | I think everything is fine on your side. Thanks for adding this dataset :)
I think it's better to wait for the slow test to be updated if you don't mind.
| Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | 31 | text: Adding IWSLT 2017 dataset.
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438
I think everything is fine on your side. Thanks for adding this dataset :)
I think it's better to wait for the slow test to be updated if you don't mind.
|
https://github.com/huggingface/datasets/pull/470 | Adding IWSLT 2017 dataset. | Thanks for fixing the isort/black changes :)
Feel free to merge if it's good for you @Narsil | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | 17 | text: Adding IWSLT 2017 dataset.
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done ?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438
Thanks for fixing the isort/black changes :)
Feel free to merge if it's good for you @Narsil |
https://github.com/huggingface/datasets/pull/466 | [METRICS] Various improvements on metrics | The cast function is now called inside `features.encode_example`.
I also added `encode_batch` that was missing.
Moreover I used the cast function in `Dataset.map` to support torch/tensorflow tensors or numpy arrays inputs.
There are tests for tensors inputs in metrics and in .map | - Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics | 42 | text: [METRICS] Various improvements on metrics
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics
The cast function is now called inside `features.encode_example`.
I also added `encode_batch` that was missing.
Moreover I used the cast function in `Dataset.map` to support torch/tensorflow tensors or numpy arrays inputs.
There are tests for tensors inputs in metrics and in .map |
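A small usage sketch of what that means in practice; the glue/sst2 metric is just an example of a metric that takes integer predictions, and the exact output dict is illustrative:
```python
import numpy as np
import nlp

# Keyword arguments only, and numpy arrays are accepted directly.
metric = nlp.load_metric("glue", "sst2")
result = metric.compute(predictions=np.array([0, 1, 1]), references=np.array([0, 1, 0]))
print(result)  # e.g. {'accuracy': ...}
```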
https://github.com/huggingface/datasets/pull/465 | Keep features after transform | One note on features inference:
if an arrow type is `struct of items` where each item is a `list`, then we return a `dict` in which each item is a `Sequence`.
It means that we don't use the Sequence <-> dict swap when we infer features.
It's fine because the swap is generally used in dataset scripts, in which features are defined (inferred features are discarded) | When applying a transform like `map`, some features were lost (and inferred features were used).
It was the case for ClassLabel, Translation, etc.
To fix that, I did some modifications in the `ArrowWriter`:
- added the `update_features` parameter. When it's `True`, then the features specified by the user (if any) can be updated with inferred features if their type don't match. `map` transform sets `update_features=True` when writing to cache file or buffer. Features won't change by default in `map`.
- added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:
```
{
"huggingface": {"features" : <serialized Features exactly like dataset_info.json>}
}
```
Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset. | 66 | text: Keep features after transform
When applying a transform like `map`, some features were lost (and inferred features were used).
It was the case for ClassLabel, Translation, etc.
To fix that, I did some modifications in the `ArrowWriter`:
- added the `update_features` parameter. When it's `True`, then the features specified by the user (if any) can be updated with inferred features if their type don't match. `map` transform sets `update_features=True` when writing to cache file or buffer. Features won't change by default in `map`.
- added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:
```
{
"huggingface": {"features" : <serialized Features exactly like dataset_info.json>}
}
```
Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset.
One note on features inference:
if an arrow type is `struct of items` where each item is a `list`, then we return a `dict` in which each item is a `Sequence`.
It means that we don't use the Sequence <-> dict swap when we infer features.
It's fine because the swap is generally used in dataset scripts, in which features are defined (inferred features are discarded) |
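As a toy illustration of that metadata mechanism with plain pyarrow (the serialized payload here is simplified, not the exact dataset_info.json format):
```python
import json
import pyarrow as pa

# Store a simplified features payload in the schema metadata under the
# "huggingface" key, then read it back from the schema.
payload = json.dumps({"features": {"text": {"dtype": "string", "_type": "Value"}}})
schema = pa.schema([pa.field("text", pa.string())]).with_metadata({"huggingface": payload})

restored = json.loads(schema.metadata[b"huggingface"].decode("utf-8"))
print(restored["features"]["text"])
```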
https://github.com/huggingface/datasets/pull/463 | Add dataset/mlsum | I think the problem is related to the `wiki_dpr` dataset, which is making the circle CI fail, as you can see:
```
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_no_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_with_nq_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_no_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_with_nq_embeddings
```
I'm facing the same issues with my last commits, I tried to rebase from master but it's still not working. Maybe @lhoestq can help with this. | New pull request that should correct the previous errors.
The load_real_data stills fails because it is looking for a default dataset URL that does not exists, this does not happen when loading the dataset with load_dataset | 57 | text: Add dataset/mlsum
New pull request that should correct the previous errors.
The load_real_data still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset
I think the problem is related to the `wiki_dpr` dataset, which is making the circle CI fail, as you can see:
```
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_no_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_with_nq_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_no_embeddings
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_with_nq_embeddings
```
I'm facing the same issues with my last commits, I tried to rebase from master but it's still not working. Maybe @lhoestq can help with this.
https://github.com/huggingface/datasets/pull/463 | Add dataset/mlsum | Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ? | New pull request that should correct the previous errors.
The load_real_data stills fails because it is looking for a default dataset URL that does not exists, this does not happen when loading the dataset with load_dataset | 20 | text: Add dataset/mlsum
New pull request that should correct the previous errors.
The load_real_data still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset
Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ? |
https://github.com/huggingface/datasets/pull/463 | Add dataset/mlsum | Hello :)
I think you can just rebase from master and it should solve the CI error | New pull request that should correct the previous errors.
The load_real_data stills fails because it is looking for a default dataset URL that does not exists, this does not happen when loading the dataset with load_dataset | 17 | text: Add dataset/mlsum
New pull request that should correct the previous errors.
The load_real_data still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset
Hello :)
I think you can just rebase from master and it should solve the CI error |
https://github.com/huggingface/datasets/pull/455 | Add bleurt | Sorry one nit: Could we use named arguments for the call to BLEURT?
i.e.
scores = self.scorer.score(references=references, candidates=predictions)
(i.e. so it is less bug prone)
| This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam | 25 | text: Add bleurt
This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam
Sorry one nit: Could we use named arguments for the call to BLEURT?
i.e.
scores = self.scorer.score(references=references, candidates=predictions)
(i.e. so it is less bug prone)
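For context, here is a minimal usage sketch of the metric described above. It assumes the standard `nlp.load_metric` entry point and keyword arguments (as recommended in this thread); the example sentences are made up.

```python
import nlp

# Load the BLEURT metric; per this PR the default config ("bleurt-base-128")
# downloads the corresponding TF checkpoint on first use.
bleurt = nlp.load_metric("bleurt")

predictions = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Keyword arguments avoid mixing up the two lists.
results = bleurt.compute(predictions=predictions, references=references)
print(results["scores"])
```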
|
https://github.com/huggingface/datasets/pull/455 | Add bleurt | Following up on Ankur's comment---we are going to drop support for
positional (not named) arguments in the future releases because it seems to
cause bugs and confusion. I hope it doesn't create too much of a mess.
On Thu, Jul 30, 2020 at 10:44, ankparikh <notifications@github.com> wrote:
> Sorry one nit: Could we use named arguments for the call to BLEURT?
>
> i.e.
> scores = self.scorer.score(references=references, candidates=predictions)
>
> (i.e. so it is less bug prone)
>
| This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam | 112 | text: Add bleurt
This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam
Following up on Ankur's comment---we are going to drop support for
positional (not named) arguments in the future releases because it seems to
cause bugs and confusion. I hope it doesn't create too much of a mess.
On Thu, Jul 30, 2020 at 10:44, ankparikh <notifications@github.com> wrote:
> Sorry one nit: Could we use named arguments for the call to BLEURT?
>
> i.e.
> scores = self.scorer.score(references=references, candidates=predictions)
>
> (i.e. so it is less bug prone)
>
|
https://github.com/huggingface/datasets/pull/455 | Add bleurt | > Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. On Thu, Jul 30, 2020 at 10:44, ankparikh <notifications@github.com> wrote:
> […](#)
> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone)
Changed @ankparikh @tsellam, thanks for taking a look! | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam | 110 | text: Add bleurt
This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam
> Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. On Thu, Jul 30, 2020 at 10:44, ankparikh <notifications@github.com> wrote:
> […](#)
> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone)
Changed @ankparikh @tsellam, thanks for taking a look! |
https://github.com/huggingface/datasets/pull/455 | Add bleurt | We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed. | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam | 19 | text: Add bleurt
This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI
cc @ankparikh @tsellam
We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed. |
https://github.com/huggingface/datasets/pull/452 | Guardian authorship dataset | Hi ! Glad you managed to fix the version issue.
The command `python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.
Can you make sure you have the json file on your side and that you have pushed it ? | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| 59 | text: Guardian authorship dataset
A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
Hi ! Glad you managed to fix the version issue.
The command `python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.
Can you make sure you have the json file on your side and that you have pushed it ? |
https://github.com/huggingface/datasets/pull/452 | Guardian authorship dataset | Is there anything else that I should do? and would the new dataset be available via the NLP package now? | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| 20 | text: Guardian authorship dataset
A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
Is there anything else that I should do? and would the new dataset be available via the NLP package now? |
https://github.com/huggingface/datasets/pull/452 | Guardian authorship dataset | No worries, this is my first contribution to an online package, and I feel very proud it's part of this library :) Thank you very much! | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| 26 | text: Guardian authorship dataset
A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
No worries, this is my first contribution to an online package, and I feel very proud it's part of this library :) Thank you very much! |
https://github.com/huggingface/datasets/pull/451 | Fix csv/json/txt cache dir | I think this is the way to go but I'm afraid this might be a little slow. I was thinking that we could use a high-quality, very fast non-crypto hash like xxhash for this kind of thing (hashing data files) | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
This should fix #444 | 40 | text: Fix csv/json/txt cache dir
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
This should fix #444
I think this is the way to go but I'm afraid this might be a little slow. I was thinking that we could use a high-quality, very fast non-crypto hash like xxhash for this kind of thing (hashing data files)
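A rough sketch of the idea described above, deriving a per-dataset cache directory from a hash of the user-provided data files; the helper name and layout are illustrative, not the actual implementation.

```python
import hashlib
import os


def cache_dir_for_data_files(base_cache_dir, data_files):
    """Build a cache sub-directory whose name depends on the user's data files."""
    m = hashlib.sha256()
    for path in sorted(data_files):
        m.update(path.encode("utf-8"))  # hash the file path
        with open(path, "rb") as f:
            m.update(f.read())          # and the file content
    return os.path.join(base_cache_dir, m.hexdigest()[:16])
```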
https://github.com/huggingface/datasets/pull/451 | Fix csv/json/txt cache dir | I tested the hashing speed [here](https://colab.research.google.com/drive/1hlhP84kLIHmOzMRQN1h8x10hKWpXXyud?usp=sharing).
I was able to get 8x speed with `xxhashlib` (42ms vs 345ms for 100MiB of data).
What do you think @thomwolf ? | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
This should fix #444 | 28 | text: Fix csv/json/txt cache dir
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
This should fix #444
I tested the hashing speed [here](https://colab.research.google.com/drive/1hlhP84kLIHmOzMRQN1h8x10hKWpXXyud?usp=sharing).
I was able to get 8x speed with `xxhashlib` (42ms vs 345ms for 100MiB of data).
What do you think @thomwolf ? |
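The kind of comparison mentioned above can be reproduced with a small timing script; this sketch assumes the `xxhash` package is installed, and the exact numbers will vary by machine.

```python
import hashlib
import time

import xxhash  # pip install xxhash

data = b"\x00" * (100 * 1024 * 1024)  # 100 MiB of dummy data

start = time.perf_counter()
hashlib.sha256(data).hexdigest()
print("sha256:", time.perf_counter() - start, "s")

start = time.perf_counter()
xxhash.xxh64(data).hexdigest()
print("xxh64: ", time.perf_counter() - start, "s")
```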
https://github.com/huggingface/datasets/pull/449 | add reuters21578 dataset | > Awesome !
> Good job on parsing these files :O
>
> Do you think it would be hard to get the two other split configurations ?
It shouldn't be that hard, I think I can consider different config names for each split | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it)
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
| 44 | text: add reuters21578 dataset
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it)
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
> Awesome !
> Good job on parsing these files :O
>
> Do you think it would be hard to get the two other split configurations ?
It shouldn't be that hard, I think I can consider different config names for each split |
https://github.com/huggingface/datasets/pull/449 | add reuters21578 dataset | > > Awesome !
> > Good job on parsing these files :O
> > Do you think it would be hard to get the two other split configurations ?
>
> It shouldn't be that hard, I think I can consider different config names for each split
Yes that would be perfect | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it)
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
| 53 | text: add reuters21578 dataset
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it)
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
> > Awesome !
> > Good job on parsing these files :O
> > Do you think it would be hard to get the two other split configurations ?
>
> It shouldn't be that hard, I think I can consider different config names for each split
Yes that would be perfect |
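One possible way to expose the three split schemes as separate configurations, sketched with hypothetical config names and not taken from the code actually merged in this PR:

```python
import nlp


class Reuters21578Config(nlp.BuilderConfig):
    """BuilderConfig carrying the split scheme to use."""

    def __init__(self, split_scheme="ModHayes", **kwargs):
        super().__init__(**kwargs)
        self.split_scheme = split_scheme


# One hypothetical config per split scheme described in the README.
BUILDER_CONFIGS = [
    Reuters21578Config(name="ModHayes", version=nlp.Version("1.0.0"), split_scheme="ModHayes"),
    Reuters21578Config(name="ModLewis", version=nlp.Version("1.0.0"), split_scheme="ModLewis"),
    Reuters21578Config(name="ModApte", version=nlp.Version("1.0.0"), split_scheme="ModApte"),
]
```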
https://github.com/huggingface/datasets/pull/448 | add aws load metric test | Could you run `make style` to fix the code_quality failure?
You'll need `black` and `isort` that you can install by doing `pip install -e .[quality]` | Following issue #445
Added a test to recognize import errors of all metrics | 26 | text: add aws load metric test
Following issue #445
Added a test to recognize import errors of all metrics
Could you run `make style` to fix the code_quality failure?
You'll need `black` and `isort` that you can install by doing `pip install -e .[quality]` |
https://github.com/huggingface/datasets/pull/441 | Add features parameter in load dataset | I changed to using features only, instead of info.
Let me know if it sounds good to you now @thomwolf | Added `features` argument in `nlp.load_dataset`.
If they don't match the data type, it raises a `ValueError`.
It's a draft PR because #440 needs to be merged first. | 20 | text: Add features parameter in load dataset
Added `features` argument in `nlp.load_dataset`.
If they don't match the data type, it raises a `ValueError`.
It's a draft PR because #440 needs to be merged first.
I changed to using features only, instead of info.
Let me know if it sounds good to you now @thomwolf
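A small sketch of how the new argument could be used; the CSV file and column names below are illustrative, not taken from the PR.

```python
import nlp

# Declare the expected schema up front; load_dataset is expected to raise a
# ValueError if the data does not match these feature types.
features = nlp.Features(
    {
        "text": nlp.Value("string"),
        "label": nlp.Value("int64"),
    }
)

dataset = nlp.load_dataset(
    "csv",
    data_files={"train": "my_data.csv"},  # hypothetical file
    features=features,
)
```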
https://github.com/huggingface/datasets/pull/437 | Fix XTREME PAN-X loading | There is an interesting design question here (cc @lhoestq).
I guess the labels form a closed set so we could also use a [nlp.ClassLabel](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) instead of a string. The differences will be mainly that:
- the labels are stored as integers and thus ready for training a model
- the string to int conversion methods are handled by the `nlp.ClassLabel` feature (see the [doc](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) and [here](https://huggingface.co/nlp/features.html) and [here](https://huggingface.co/nlp/quicktour.html#fine-tuning-a-deep-learning-model)).
In my opinion, storing the labels as integers instead of strings makes it:
- slightly less readable when accessing a dataset example (e.g. with `dataset[0]`)
- more constrained, since it forces a specific mapping from strings to integers
- clearer that there is a fixed and predefined list of labels
- easier to list all the labels (directly visible in the features).
=> overall I'm pretty neutral about using one or the other option (`nlp.string` or `nlp.ClassLabel`).
Note that we can now rather easily convert from one to the other with the map function and something like:
```python
dataset = dataset.map(lambda x: x, features=nlp.Features({'labels': nlp.ClassLabel(MY_LABELS_NAMES)}))
dataset = dataset.map(lambda x: {'labels': dataset.features['labels'].int2str(x['labels'])}, features=nlp.Features({'labels': nlp.Value('string')}))
```
^^ this could probably be made even simpler (in particular for the second case) | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | 195 | text: Fix XTREME PAN-X loading
Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
```
There is an interesting design question here (cc @lhoestq).
I guess the labels form a closed set so we could also use a [nlp.ClassLabel](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) instead of a string. The differences will be mainly that:
- the labels are stored as integers and thus ready for training a model
- the string to int conversion methods are handled by the `nlp.ClassLabel` feature (see the [doc](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) and [here](https://huggingface.co/nlp/features.html) and [here](https://huggingface.co/nlp/quicktour.html#fine-tuning-a-deep-learning-model)).
In my opinion, storing the labels as integers instead of strings makes it:
- slightly less readable when accessing a dataset example (e.g. with `dataset[0]`)
- more constrained, since it forces a specific mapping from strings to integers
- clearer that there is a fixed and predefined list of labels
- easier to list all the labels (directly visible in the features).
=> overall I'm pretty neutral about using one or the other option (`nlp.string` or `nlp.ClassLabel`).
Note that we can now rather easily convert from one to the other with the map function and something like:
```python
dataset = dataset.map(lambda x: x, features=nlp.Features({'labels': nlp.ClassLabel(MY_LABELS_NAMES)}))
dataset = dataset.map(lambda x: {'labels': dataset.features['labels'].int2str(x['labels'])}, features=nlp.Features({'labels': nlp.Value('string')}))
```
^^ this could probably be made even simpler (in particular for the second case) |
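For reference, a small standalone illustration of what `nlp.ClassLabel` provides for NER-style tags; the tag list here is just an example.

```python
import nlp

ner_tags = nlp.ClassLabel(names=["O", "B-ORG", "I-ORG", "B-LOC", "I-LOC"])

print(ner_tags.str2int("B-ORG"))  # -> 1
print(ner_tags.int2str(2))        # -> "I-ORG"
print(ner_tags.names)             # the full, fixed label list, visible in the features
```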
https://github.com/huggingface/datasets/pull/437 | Fix XTREME PAN-X loading | I see. This is an interesting question.
Maybe, since the dataset doesn't provide the mapping, we shouldn't force an arbitrary one and should keep them as strings?
Moreover, for NER the labels are often different from one dataset to another, so it's probably good to keep strings (there is no conventional mapping).
Also as the column is called "ner_tags" (or "langs"), you can already assume that there is a fixed and predefined list of labels. | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | 76 | text: Fix XTREME PAN-X loading
Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
```
I see. This is an interesting question.
Maybe, since the dataset doesn't provide the mapping, we shouldn't force an arbitrary one and should keep them as strings?
Moreover, for NER the labels are often different from one dataset to another, so it's probably good to keep strings (there is no conventional mapping).
Also as the column is called "ner_tags" (or "langs"), you can already assume that there is a fixed and predefined list of labels. |
https://github.com/huggingface/datasets/pull/437 | Fix XTREME PAN-X loading | Yes sounds good to me.
This makes me wonder whether we might want to have a default identity function in `map`, so this method could also be used to easily cast features. What do you think?
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | 36 | text: Fix XTREME PAN-X loading
Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
```
Yes sounds good to me.
This make me wonder if we don’t want to have a default identity function in `map` so this method could also be used to easily cast features. What do you think? |
https://github.com/huggingface/datasets/pull/437 | Fix XTREME PAN-X loading | Yes sounds good. I also noticed that people use map with identity to write a dataset into a specified cache file. | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | 21 | text: Fix XTREME PAN-X loading
Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
```
Yes sounds good. I also noticed that people use map with identity to write a dataset into a specified cache file. |
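A sketch of the identity-`map` trick mentioned above, used only to write the (unchanged) dataset to a chosen arrow file; the dataset and path are illustrative.

```python
import nlp

dataset = nlp.load_dataset("squad", split="train[:100]")

# Identity function: no transformation, but the resulting dataset is written
# to the arrow file given by `cache_file_name`.
dataset = dataset.map(lambda x: x, cache_file_name="/tmp/squad_train_100.arrow")
```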
https://github.com/huggingface/datasets/pull/432 | Fix handling of config files while loading datasets from multiple processes | Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes) | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes. | 43 | text: Fix handling of config files while loading datasets from multiple processes
When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes.
Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes) |
https://github.com/huggingface/datasets/pull/432 | Fix handling of config files while loading datasets from multiple processes | Thanks for approving my patch.
I agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.
I'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that, but it tainted my view of that module. Perhaps it's been fixed (or I just misused it), but I thought you should know to take steps to test it. | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes. | 124 | text: Fix handling of config files while loading datasets from multiple processes
When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes.
Thanks for approving my patch.
I agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.
I'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that, but it tainted my view of that module. Perhaps it's been fixed (or I just misused it), but I thought you should know to take steps to test it.
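A minimal illustration of the `filelock`-based approach discussed above, guarding the copy of `dataset_infos.json`; the helper name and paths are illustrative.

```python
import filecmp
import os
import shutil

from filelock import FileLock  # pip install filelock


def copy_dataset_infos(src, dst):
    """Copy src -> dst only if needed, holding a lock so that concurrent
    processes never read a partially written file."""
    with FileLock(dst + ".lock"):
        if not os.path.exists(dst) or not filecmp.cmp(src, dst, shallow=False):
            shutil.copy(src, dst)
```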
https://github.com/huggingface/datasets/pull/431 | Specify split post processing + Add post processing resources downloading | I was using a hack in `wiki_dpr` to download the index from GCS even for the configurations without the embeddings.
However as GCS is something internal, I changed the logic to add a download step for indexes directly in the dataset script, using the `DownloadManager`.
This change was directly linked to the changes I did to take into account the split name in the post processing, so I included this change in this PR too.
To summarize:
Dataset builders can now implement
- `_post_processing_resources(split)`: return a dict `resource_name -> resource_file_name`. It defines the additional resources such as indexes or arrow files that you need in post processing
- `_download_post_processing_resources(split, resource_name, dl_manager))`: if some resources can be downloaded, you can use the download_manager to download them
- `_post_process(dataset, resources_path)`: (main function for post processing) given a dataset, you can apply dataset transforms or add indexes. For resources that have been downloaded, you can load them. For the others, you can generate and save them. The paths to load/save resources are in `resources_path` which is a dictionary `resource_name -> resource_path`
About the CI:
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
```
It fails because I changed the input of post processing functions (to include the split name) | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS | 207 | text: Specify split post processing + Add post processing resources downloading
Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS
I was using a hack in `wiki_dpr` to download the index from GCS even for the configurations without the embeddings.
However as GCS is something internal, I changed the logic to add a download step for indexes directly in the dataset script, using the `DownloadManager`.
This change was directly linked to the changes I did to take into account the split name in the post processing, so I included this change in this PR too.
To summarize:
Dataset builders can now implement
- `_post_processing_resources(split)`: return a dict `resource_name -> resource_file_name`. It defines the additional resources such as indexes or arrow files that you need in post processing
- `_download_post_processing_resources(split, resource_name, dl_manager)`: if some resources can be downloaded, you can use the download_manager to download them
- `_post_process(dataset, resources_path)`: (main function for post processing) given a dataset, you can apply dataset transforms or add indexes. For resources that have been downloaded, you can load them. For the others, you can generate and save them. The paths to load/save resources are in `resources_path` which is a dictionary `resource_name -> resource_path`
About the CI:
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
```
It fails because I changed the input of post processing functions (to include the split name) |
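A condensed sketch of what a builder using these three hooks might look like. The resource name, URL and index column are hypothetical, this is not the actual `wiki_dpr` implementation, and the usual `_info`/`_split_generators`/`_generate_examples` methods are omitted.

```python
import os

import nlp


class MyIndexedDataset(nlp.GeneratorBasedBuilder):
    def _post_processing_resources(self, split):
        # One serialized FAISS index per split.
        return {"embeddings_index": "index_{}.faiss".format(split)}

    def _download_post_processing_resources(self, split, resource_name, dl_manager):
        if resource_name == "embeddings_index":
            # Hypothetical URL; the resource could also be built locally in _post_process.
            return dl_manager.download("https://example.com/index_{}.faiss".format(split))

    def _post_process(self, dataset, resources_path):
        index_file = resources_path["embeddings_index"]
        if os.path.exists(index_file):
            dataset.load_faiss_index("embeddings", index_file)
        else:
            dataset.add_faiss_index(column="embeddings")
            dataset.save_faiss_index("embeddings", index_file)
        return dataset
```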
https://github.com/huggingface/datasets/pull/431 | Specify split post processing + Add post processing resources downloading | I started to add metadata in the DatasetInfo.
Note that because there are new fields, **ALL the dataset_info[s].json generated after these changes won't be loadable from older versions of the lib**
Right now it looks like this:
```json
"post_processing_resources_checksums": {
"train": {
"embeddings_index": {
"num_bytes": 30720045,
"checksum": "b04fb4f4f3ab83b9d1b9f6f9eb236f1c04a9fd61bef7cee16b12df8ac911766a"
}
}
},
"post_processing_size": 30720045,
``` | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS | 54 | text: Specify split post processing + Add post processing resources downloading
Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS
I started to add metadata in the DatasetInfo.
Note that because there are new fields, **ALL the dataset_info[s].json generated after these changes won't be loadable from older versions of the lib**
Right now it looks like this:
```json
"post_processing_resources_checksums": {
"train": {
"embeddings_index": {
"num_bytes": 30720045,
"checksum": "b04fb4f4f3ab83b9d1b9f6f9eb236f1c04a9fd61bef7cee16b12df8ac911766a"
}
}
},
"post_processing_size": 30720045,
``` |
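The `num_bytes`/`checksum` pair shown above can be computed for any resource file along these lines; this is a sketch, not the library's internal helper.

```python
import hashlib
import os


def resource_checksum(path):
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return {"num_bytes": os.path.getsize(path), "checksum": sha256.hexdigest()}
```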
https://github.com/huggingface/datasets/pull/431 | Specify split post processing + Add post processing resources downloading | Good point. Should we already anticipate that we may add other fields later, and change the code so that new fields can be added without breaking backward compatibility? | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS | 33 | text: Specify split post processing + Add post processing resources downloading
Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS
Good point. Should we already anticipate that we may add other fields later, and change the code so that new fields can be added without breaking backward compatibility?
https://github.com/huggingface/datasets/pull/431 | Specify split post processing + Add post processing resources downloading | I added:
- post processing features (inside a PostProcessedInfo object)
- backward compatibility for dataset info
- post processing tests (as_dataset and download_and_prepare) for map (change features), select (change number of elements) and add_faiss_index (add indexes)
And I fixed a bug in `map` that I found thanks to the new tests
Now I just have to move `post_processing_resources_checksums` to PostProcessedInfo as well and everything should be good :)
Edit: done | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS | 70 | text: Specify split post processing + Add post processing resources downloading
Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS
I added:
- post processing features (inside a PostProcessedInfo object)
- backward compatibility for dataset info
- post processing tests (as_dataset and download_and_prepare) for map (change features), select (change number of elements) and add_faiss_index (add indexes)
And I fixed a bug in `map` that I found thanks to the new tests
Now I just have to move `post_processing_resources_checksums` to PostProcessedInfo as well and everything should be good :)
Edit: done |
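To illustrate the backward-compatibility point, a minimal sketch (field names are placeholders, not the actual `PostProcessedInfo` definition) is to have `from_dict` simply drop unknown keys:
```python
from dataclasses import dataclass, field, fields

@dataclass
class PostProcessedInfo:
    # Placeholder fields for illustration only.
    features: dict = None
    resources_checksums: dict = field(default_factory=dict)

    @classmethod
    def from_dict(cls, info_dict):
        # Ignore keys this version does not know about, so info files written
        # by newer versions (with extra fields) still load here.
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in info_dict.items() if k in known})
```
Any field added later would then simply be ignored by older readers instead of raising a TypeError.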
https://github.com/huggingface/datasets/pull/430 | add DatasetDict | I did the changes in the docstrings and I added a type check in each `DatasetDict` method to make sure all values are of type `Dataset` | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle | 26 | text: add DatasetDict
## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle
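To make the `{split_name: argument_value}` convention above concrete, here is a small sketch following the argument names listed here (the cache file names are made up):
```python
squad = squad.map(
    my_func,
    cache_file_name={
        "train": "squad_train_processed.arrow",
        "validation": "squad_validation_processed.arrow",
    },
)
```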
I did the changes in the docstrings and I added a type check in each `DatasetDict` method to make sure all values are of type `Dataset` |
https://github.com/huggingface/datasets/pull/430 | add DatasetDict | I'm trying to follow along with the following about datasets from the docs:
https://huggingface.co/nlp/loading_datasets.html
https://huggingface.co/nlp/processing.html
However, the train_test_split method no longer works, as it is expecting a Dataset rather than a DatasetDict. How would I go about splitting a CSV into a train and test set?
I'm trying to utilize the Trainer() class, but am having trouble converting my data from a csv into dataset objects to pass in. | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle | 69 | text: add DatasetDict
## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle
I'm trying to follow along with the following about datasets from the docs:
https://huggingface.co/nlp/loading_datasets.html
https://huggingface.co/nlp/processing.html
However, the train_test_split method no longer works, as it is expecting a Dataset rather than a DatasetDict. How would I go about splitting a CSV into a train and test set?
I'm trying to utilize the Trainer() class, but am having trouble converting my data from a csv into dataset objects to pass in. |
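One possible way to do this, as a hedged sketch (assuming the generic `csv` loading script and `Dataset.train_test_split` are available in your version; the file name is made up):
```python
from nlp import load_dataset

# Loading a local CSV returns a DatasetDict with a single "train" split
dsets = load_dataset("csv", data_files="my_data.csv")

# Take the underlying Dataset and split it into train/test
splits = dsets["train"].train_test_split(test_size=0.1)
train_dset, test_dset = splits["train"], splits["test"]
```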
https://github.com/huggingface/datasets/pull/429 | mlsum | Thanks @RachelKer for this PR.
I think the dummy_data structure does not match either. In the `_split_generator` you have something like `os.path.join(downloaded_files["validation"], lang+'_val.jsonl')` but in your dummy_data you have `os.path.join(downloaded_files["validation"], lang+"_val.zip", lang+'_val.jsonl')`. I think `jsonl` files should be directly in the `dummy_data` folder without the sub-folder
@lhoestq | Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | 48 | text: mlsum
Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
Thanks @RachelKer for this PR.
I think the dummy_data structure does not match either. In the `_split_generator` you have something like `os.path.join(downloaded_files["validation"], lang+'_val.jsonl')` but in your dummy_data you have `os.path.join(downloaded_files["validation"], lang+"_val.zip", lang+'_val.jsonl')`. I think `jsonl` files should be directly in the `dummy_data` folder without the sub-folder
@lhoestq |
https://github.com/huggingface/datasets/pull/429 | mlsum | Hi @RachelKer :)
Thanks for adding MLSUM !
To fix the CI I think you just have to rebase from master | Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | 21 | text: mlsum
Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
Hi @RachelKer :)
Thanks for adding MLSUM !
To fix the CI I think you just have to rebase from master |
https://github.com/huggingface/datasets/pull/429 | mlsum | It looks like your PR makes tons of changes in other datasets.
Maybe this is because of the merge from master ? | Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | 22 | text: mlsum
Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
It looks like your PR makes tons of changes in other datasets.
Maybe this is because of the merge from master ? |
https://github.com/huggingface/datasets/pull/429 | mlsum | Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ? | Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | 23 | text: mlsum
Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ? |
https://github.com/huggingface/datasets/pull/423 | Change features vs schema logic | I had to make `SplitDict` serializable to be able to copy `DatasetInfo` objects properly.
Serialization was also asked in #389 | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account | 20 | text: Change features vs schema logic
## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account
I had to make `SplitDict` serializable to be able to copy `DatasetInfo` objects properly.
Serialization was also asked in #389 |
https://github.com/huggingface/datasets/pull/423 | Change features vs schema logic | One thing I forgot to say here, is that we also want to use the features arguments of `load_dataset` (which goes in the builder’s config) to override the default features of a dataset script. | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account | 34 | text: Change features vs schema logic
## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account
One thing I forgot to say here, is that we also want to use the features arguments of `load_dataset` (which goes in the builder’s config) to override the default features of a dataset script. |
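As a sketch of the kind of override being discussed (the behaviour was still being decided at this point, so treat this as illustrative rather than the final API; column names are made up):
```python
import nlp

features = nlp.Features({
    "context": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["negative", "positive"]),
})
# Pass features to load_dataset to override the script's default features
dataset = nlp.load_dataset("csv", data_files="my_data.csv", features=features, split="train")
```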
https://github.com/huggingface/datasets/pull/421 | Style change | Oh this is the PR where I ran make quality and make style and some previous files from master were changed | make quality and make style ran on scripts | 21 | text: Style change
make quality and make style ran on scripts
Oh this is the PR where I ran make quality and make style and some previous files from master were changed |
https://github.com/huggingface/datasets/pull/416 | Fix xtreme panx directory | great, I think I did not download the data the way you do, but yours is more reasonable. | Fix #412 | 18 | text: Fix xtreme panx directory
Fix #412
great, I think I did not download the data the way you do, but yours is more reasonable. |
https://github.com/huggingface/datasets/pull/398 | Add inline links | Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation? | Add inline links to `Contributing.md` | 22 | text: Add inline links
Add inline links to `Contributing.md`
Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation? |
https://github.com/huggingface/datasets/pull/390 | Concatenate datasets | Looks cool :)
I feel like
```python
concatenated_dataset = dataset1.concatenate(dataset2)
```
could be more natural. What do you think ?
Could you also concatenate the `nlp.Dataset._data_files`?
```python
return cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)
``` | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | 37 | text: Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
Looks cool :)
I feel like
```python
concatenated_dataset = dataset1.concatenate(dataset2)
```
could be more natural. What do you think ?
Could you also concatenate the `nlp.Dataset._data_files`?
```python
return cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)
``` |
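Pulling these two suggestions together, a minimal sketch of what such a method could look like (assuming the Arrow table is stored in `Dataset._data`; this is an illustration, not the merged implementation):
```python
import pyarrow as pa

def concatenate(self, other_dataset):
    # Intended as a method on nlp.Dataset; assumes both datasets share the same schema.
    table = pa.concat_tables([self._data, other_dataset._data])
    return self.__class__(
        table,
        info=self.info,
        split=self.split,
        data_files=self._data_files + other_dataset._data_files,
    )
```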
https://github.com/huggingface/datasets/pull/390 | Concatenate datasets | I feel like "WikiBooks" would be a multi task dataset that could fit in the #217 discussion.
Not sure concatenate should be the solution for a multi task dataset. | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | 29 | text: Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
```
I feel like "WikiBooks" would be a multi task dataset that could fit in the #217 discussion.
Not sure concatenate should be the solution for a multi task dataset. |