html_url (string, lengths 47-49) | title (string, lengths 4-111) | comments (string, lengths 71-20.4k) | body (string, lengths 0-12.9k) | comment_length_in_words (int64, 16-1.61k) | text (string, lengths 100-20.5k) |
---|---|---|---|---|---|
https://github.com/huggingface/datasets/pull/2198 | added file_permission in load_dataset | From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evolve.
@bhavitvyamalik I'm closing the PR for now if you don't mind | As discussed in #2065 I've added a `file_permission` argument to `load_dataset`.
I mainly added 2 things here:
1) The permissions of downloaded datasets, once converted to .arrow files, can be changed with the `file_permission` argument of `load_dataset` (the default is 0o644).
2) In case the user later uses `map` to generate another cache file for the dataset, it ensures the permissions of the newly generated file match those of the `*-train.arrow` file inside `cache_dir` for that dataset. | 55 | text: added file_permission in load_dataset
As discussed in #2065 I've added a `file_permission` argument to `load_dataset`.
I mainly added 2 things here:
1) The permissions of downloaded datasets, once converted to .arrow files, can be changed with the `file_permission` argument of `load_dataset` (the default is 0o644).
2) In case the user later uses `map` to generate another cache file for the dataset, it ensures the permissions of the newly generated file match those of the `*-train.arrow` file inside `cache_dir` for that dataset.
From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evolve.
@bhavitvyamalik I'm closing the PR for now if you don't mind |
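A minimal sketch of the umask-based workaround mentioned above (assuming a POSIX system; the dataset name is only an example):
```python
import os
from datasets import load_dataset

# Setting the process umask controls the permissions of files the process creates,
# including the cached .arrow files written by `load_dataset`.
# A umask of 0o022 yields 0o644 for newly created files.
previous_umask = os.umask(0o022)
try:
    ds = load_dataset("imdb", split="train")
finally:
    os.umask(previous_umask)  # restore the previous umask
```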
https://github.com/huggingface/datasets/pull/2191 | Refactorize tests to use Dataset as context manager | I find that idea of using a fixture instead very interesting!
Let me rework this PR a little bit, @lhoestq. | Refactorize Dataset tests to use Dataset as context manager. | 20 | text: Refactorize tests to use Dataset as context manager
Refactorize Dataset tests to use Dataset as context manager.
I find that idea of using a fixture instead very interesting!
Let me rework this PR a little bit, @lhoestq. |
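A minimal sketch of the fixture idea mentioned above (assuming pytest; the dataset contents are illustrative):
```python
import pytest
from datasets import Dataset

@pytest.fixture
def in_memory_dataset():
    # Build a small in-memory Dataset for each test that requests this fixture.
    dset = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})
    yield dset
    # Teardown happens here; real tests may also clean up cache files explicitly.
    del dset

def test_num_rows(in_memory_dataset):
    assert in_memory_dataset.num_rows == 2
```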
https://github.com/huggingface/datasets/pull/2191 | Refactorize tests to use Dataset as context manager | @lhoestq, as this is a big refactoring, I had a lot of trouble resolving the conflicts with the master branch...
Therefore, I think it is better to merge this as it is, and then make other PRs with additional refactorings, before I get conflicts with the master branch again... | Refactorize Dataset tests to use Dataset as context manager. | 49 | text: Refactorize tests to use Dataset as context manager
Refactorize Dataset tests to use Dataset as context manager.
@lhoestq, as this is a big refactoring, I had a lot of trouble resolving the conflicts with the master branch...
Therefore, I think it is better to merge this as it is, and then make other PRs with additional refactorings, before I get conflicts with the master branch again... |
https://github.com/huggingface/datasets/pull/2191 | Refactorize tests to use Dataset as context manager | There are still some conflicts that prevent merging.
Moreover, I noticed that you added one fixture per method of the Dataset object to be mocked. The code of all these fixtures is pretty much the same, so feel free to factorize them into one fixture.
Also feel free to create another branch from `master` if you don't want to fix the conflicts of this branch.
Let me know if I can help you on this | Refactorize Dataset tests to use Dataset as context manager. | 74 | text: Refactorize tests to use Dataset as context manager
Refactorize Dataset tests to use Dataset as context manager.
There are still some conflicts that prevent merging.
Moreover, I noticed that you added one fixture per method of the Dataset object to be mocked. The code of all these fixtures is pretty much the same, so feel free to factorize them into one fixture.
Also feel free to create another branch from `master` if you don't want to fix the conflicts of this branch.
Let me know if I can help you on this |
https://github.com/huggingface/datasets/pull/2191 | Refactorize tests to use Dataset as context manager | @lhoestq, yes, the new conflicts appeared after today's merge commits on master...
I am definitely going to split this PR into smaller ones in order to avoid having to resolve many conflicts after each commit on master. There are lots of conflicts and these are painful to resolve. | Refactorize Dataset tests to use Dataset as context manager. | 48 | text: Refactorize tests to use Dataset as context manager
Refactorize Dataset tests to use Dataset as context manager.
@lhoestq, yes, the new conflicts appeared after today's merge commits on master...
I am definitely going to split this PR into smaller ones in order to avoid having to resolve many conflicts after each commit on master. There are lots of conflicts and these are painful to resolve. |
https://github.com/huggingface/datasets/pull/2182 | Set default in-memory value depending on the dataset size | TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | 44 | text: Set default in-memory value depending on the dataset size
Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ |
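A rough sketch of how the size-dependent default interacts with user code (the `IN_MEMORY_MAX_SIZE` config name and the `keep_in_memory` parameter are assumptions based on the library's later public API, not part of this PR's diff):
```python
import datasets
from datasets import load_dataset

# Datasets smaller than this many bytes are loaded in memory by default;
# larger ones stay memory-mapped from the Arrow cache (illustrative threshold).
datasets.config.IN_MEMORY_MAX_SIZE = 250 * 1024 * 1024

# The default can still be overridden explicitly per call:
ds = load_dataset("imdb", split="train", keep_in_memory=False)
```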
https://github.com/huggingface/datasets/pull/2182 | Set default in-memory value depending on the dataset size | @lhoestq I have a question, regarding:
> Also maybe we should add a warning if someone tries to specify cache_file_name= in map, filter etc. on a dataset that is in memory, since the computation is not going to be cached in this case.
- It might be the case that the user has an in-memory dataset and wants to use `map` and cache the result by passing `cache_file_name=`
- This is indeed allowed by the library and works as expected: the dataset is cached.
Why add a warning? | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | 88 | text: Set default in-memory value depending on the dataset size
Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
@lhoestq I have a question, regarding:
> Also maybe we should add a warning if someone tries to specify cache_file_name= in map, filter etc. on a dataset that is in memory, since the computation is not going to be cached in this case.
- It might be the case that the user has an in-memory dataset and wants to use `map` and cache the result by passing `cache_file_name=`
- This is indeed allowed by the library and works as expected: the dataset is cached.
Why add a warning? |
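A small sketch of the case discussed above, where an in-memory dataset's `map` result is still cached by passing `cache_file_name=` (dataset name and path are illustrative):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train", keep_in_memory=True)

# Even though the dataset lives in memory, passing `cache_file_name=` makes
# `map` write its result to an explicit Arrow file on disk.
ds = ds.map(
    lambda example: {"n_chars": len(example["text"])},
    cache_file_name="/tmp/imdb_with_n_chars.arrow",
)
```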
https://github.com/huggingface/datasets/pull/2182 | Set default in-memory value depending on the dataset size | Yes right, I meant if `load_from_cache_file` is set to True and `cache_file_name` is None. My bad :p | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | 18 | text: Set default in-memory value depending on the dataset size
Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
Yes right, I meant if `load_from_cache_file` is set to True and `cache_file_name` is None. My bad :p |
https://github.com/huggingface/datasets/pull/2178 | Fix cast memory usage by using map on subtables | I updated the bleurt mocking method and the bleurt test is passing now.
I also ran the slow tests and they are passing for bleurt. | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
edit: we'll use the same mechanism for `filter` | 24 | text: Fix cast memory usage by using map on subtables
The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
edit: we'll use the same mechanism for `filter`
I updated the bleurt mocking method and the bleurt test is passing now.
I also ran the slow tests and they are passing for bleurt. |
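A sketch of the `arrow` formatting described above (assuming that an arrow-formatted dataset passes and accepts pyarrow Tables in batched `map`; the dataset and column names are illustrative):
```python
import pyarrow as pa
import pyarrow.compute as pc
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# With the "arrow" format, batches are pyarrow Tables instead of python dicts,
# so pyarrow transforms can be applied directly inside `map`.
def add_text_length(table: pa.Table) -> pa.Table:
    return table.append_column("text_length", pc.utf8_length(table["text"]))

ds = ds.with_format("arrow").map(add_text_length, batched=True)
```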
https://github.com/huggingface/datasets/pull/2171 | Fixed the link to wikiauto training data. | Also, you can ignore the CI failing on `docs`; this has been fixed on master :) | 16 | text: Fixed the link to wikiauto training data.
Also, you can ignore the CI failing on `docs`; this has been fixed on master :) |
|
https://github.com/huggingface/datasets/pull/2171 | Fixed the link to wikiauto training data. | @lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR! | 24 | text: Fixed the link to wikiauto training data.
@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR! |
|
https://github.com/huggingface/datasets/pull/2169 | Updated WER metric implementation to avoid memory issues | Hi ! Thanks for suggesting this fix
Unfortunately it looks like it's already been fixed by #2111
Feel free to share your thoughts about this PR !
I'm closing this one if you don't mind. | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| 35 | text: Updated WER metric implementation to avoid memory issues
This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
Hi ! Thanks for suggesting this fix
Unfortunately it looks like it's already been fixed by #2111
Feel free to share your thoughts about this PR !
I'm closing this one if you don't mind. |
https://github.com/huggingface/datasets/pull/2168 | Preserve split type when reloading dataset | Thanks for diving into this !
Before going further, I just want to make sure that using `eval` is the right solution
Personally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's possible to change this aspect instead.
Maybe it would be better to convert the `_RelativeInstruction` to a string (or "specs") ?
It looks like `ReadInstruction.from_spec` already exists, but not the other way around.
The specs are the string representation of instructions. For example: `train+validation[:50%]`.
Let me know what you think ! And thanks again, this issue has been here for a while now ^^ | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
| 119 | text: Preserve split type when reloading dataset
Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
Thanks for diving into this !
Before going further, I just want to make sure that using `eval` is the right solution
Personally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's possible to change this aspect instead.
Maybe it would be better to convert the `_RelativeInstruction` to a string (or "specs") ?
It looks like `ReadInstruction.from_spec` already exists, but not the other way around.
The specs are the string representation of instructions. For example: `train+validation[:50%]`.
Let me know what you think ! And thanks again, this issue has been here for a while now ^^ |
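A short sketch of the spec-string approach suggested above (assuming the public `ReadInstruction` API; the dataset name is illustrative):
```python
from datasets import ReadInstruction, load_dataset

# A spec string and an explicit ReadInstruction describe the same slice.
ri = ReadInstruction.from_spec("train[10%:30%]")
ri_explicit = ReadInstruction("train", from_=10, to=30, unit="%")  # default "closest" rounding

ds = load_dataset("imdb", split=ri)
```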
https://github.com/huggingface/datasets/pull/2168 | Preserve split type when reloading dataset | @lhoestq Yes, before going with `eval`, I thought about this approach with the "spec". The only issue with this approach is that we have to come up with a representation for the `rounding` arg.
What do you think about this (maybe too verbose)?
```python
>>> print(ReadInstruction("train", rounding="pct1_dropremainder", from_=10, to=30).to_spec())
train[10:30](pct1_dropremainder)
``` | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
| 50 | text: Preserve split type when reloading dataset
Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
@lhoestq Yes, before going with `eval`, I thought about this approach with the "spec". The only issue with this approach is that we have to come up with a representation for the `rounding` arg.
What do you think about this (maybe too verbose)?
```python
>>> print(ReadInstruction("train", rounding="pct1_dropremainder", from_=10, to=30).to_spec())
train[10:30](pct1_dropremainder)
``` |
https://github.com/huggingface/datasets/pull/2168 | Preserve split type when reloading dataset | Good idea !
First we must note that the rounding is only used for percentage instructions.
For absolute instructions there's no rounding ambiguity.
By default, the rounding is set to `closest`. For example, if you have a train set of 999 examples and you provide an instruction spec `"train[:1%]"`, you're going to get the first ten examples (while the `pct1_dropremainder` rounding would return 9 examples).
Currently there's no way to get an instruction with a `pct1_dropremainder` rounding strategy from an instruction spec.
So we can either drop the support of `pct1_dropremainder` or define a way to use this strategy from a spec.
I don't think dropping `pct1_dropremainder` would be a good idea since it allows each percent to have the same number of examples (even the last one). Therefore I think your suggestion makes total sense and we should add a representation of this rounding strategy.
I like what you suggested: `train[10%:30%](pct1_dropremainder)` is fine, and it seems compatible with the regex that parses the instruction specs.
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
| 171 | text: Preserve split type when reloading dataset
Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
Good idea !
First we must note that the rounding is only used for percentage instructions.
For absolute instructions there's no rounding ambiguity.
By default, the rounding is set to `closest`. For example, if you have a train set of 999 examples and you provide an instruction spec `"train[:1%]"`, you're going to get the first ten examples (while the `pct1_dropremainder` rounding would return 9 examples).
Currently there's no way to get an instruction with a `pct1_dropremainder` rounding strategy from an instruction spec.
So we can either drop the support of `pct1_dropremainder` or define a way to use this strategy from a spec.
I don't think dropping `pct1_dropremainder` would be a good idea since it allows each percent to have the same number of examples (even the last one). Therefore I think your suggestion makes total sense and we should add a representation of this rounding strategy.
I like what you suggested: `train[10%:30%](pct1_dropremainder)` is fine, and it seems compatible with the regex that parses the instruction specs. |
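A sketch of how the `pct1_dropremainder` rounding can be requested explicitly, together with the spec syntax proposed above (the spec form is hypothetical at the time of this discussion):
```python
from datasets import ReadInstruction, load_dataset

# Explicit construction with the pct1_dropremainder rounding strategy:
ri = ReadInstruction("train", to=1, unit="%", rounding="pct1_dropremainder")
ds = load_dataset("imdb", split=ri)

# The proposed spec form would express the same slice as:
#   "train[:1%](pct1_dropremainder)"
```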
https://github.com/huggingface/datasets/pull/2168 | Preserve split type when reloading dataset | @lhoestq I've added a test and addressed the comments.
Additionally, `ReadInstruction` is converted to its spec form in `builder.py` to avoid a circular import that would happen if this logic was in `arrow_reader.py`. If you think it's better to have this logic in `arrow_reader.py`, the import can be delayed by putting it inside a function. | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
| 55 | text: Preserve split type when reloading dataset
Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
@lhoestq I've added a test and addressed the comments.
Additionally, `ReadInstruction` is converted to its spec form in `builder.py` to avoid a circular import that would happen if this logic was in `arrow_reader.py`. If you think it's better to have this logic in `arrow_reader.py`, the import can be delayed by putting it inside a function. |
https://github.com/huggingface/datasets/pull/2163 | Concat only unique fields in DatasetInfo.from_merge | Hi @mariosasko,
Just came across this PR and I was wondering if we can use
`description = "\n\n".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`
This will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` in place of `OrderedDict`, but insertion-ordered `dict` is only available from Python 3.7 onwards | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | 49 | text: Concat only unique fields in DatasetInfo.from_merge
I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103
Hi @mariosasko,
Just came across this PR and I was wondering if we can use
`description = "\n\n".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`
This will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` in place of `OrderedDict`, but insertion-ordered `dict` is only available from Python 3.7 onwards |
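A self-contained sketch of the deduplication idea above (toy data):
```python
from collections import OrderedDict

descriptions = ["Dataset A.", "Dataset B.", "Dataset A."]

# set() would deduplicate but lose the original order;
# OrderedDict.fromkeys keeps the first occurrence of each description, in order.
merged = "\n\n".join(OrderedDict.fromkeys(descriptions))
print(merged)  # "Dataset A.\n\nDataset B."
```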
https://github.com/huggingface/datasets/pull/2163 | Concat only unique fields in DatasetInfo.from_merge | Hi,
let's see what @lhoestq thinks. Although my approach adds more code, it's more readable IMO. | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | 16 | text: Concat only unique fields in DatasetInfo.from_merge
I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103
Hi,
let's see what @lhoestq thinks. Although my approach adds more code, it's more readable IMO. |
https://github.com/huggingface/datasets/pull/2155 | Add table classes to the documentation | Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! | Following #2025, I added the table classes to the documentation
cc @albertvillanova | 29 | text: Add table classes to the documentation
Following #2025 , I added the table classes to the documentation
cc @albertvillanova
Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! |
https://github.com/huggingface/datasets/pull/2151 | Add support for axis in concatenate datasets | @lhoestq I was thinking that the order of the TableBlocks is not relevant, is it?
I mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:
```
blocks = [in_memory_1, memory_mapped, in_memory_2]
```
I could reorder the list:
```
blocks = [in_memory_1, in_memory_2, memory_mapped]
```
so that the first 2 can be consolidated into a single one:
```
blocks = [in_memory_3, memory_mapped]
``` | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | 65 | text: Add support for axis in concatenate datasets
Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853.
@lhoestq I was thinking that the order of the TableBlocks is not relevant, is it?
I mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:
```
blocks = [in_memory_1, memory_mapped, in_memory_2]
```
I could reorder the list:
```
blocks = [in_memory_1, in_memory_2, memory_mapped]
```
so that the first 2 can be consolidated into a single one:
```
blocks = [in_memory_3, memory_mapped]
``` |
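A minimal sketch of the `axis` parameter added by this PR (toy data):
```python
from datasets import Dataset, concatenate_datasets

ds_text = Dataset.from_dict({"text": ["foo", "bar"]})
ds_label = Dataset.from_dict({"label": [0, 1]})

# axis=0 stacks rows (same columns); axis=1 joins columns (same number of rows).
ds_joined = concatenate_datasets([ds_text, ds_label], axis=1)
print(ds_joined.column_names)  # ['text', 'label']
```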
https://github.com/huggingface/datasets/pull/2151 | Add support for axis in concatenate datasets | I think the order is important, users won't expect the dataset to be "shuffled" when they add a new item | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | 20 | text: Add support for axis in concatenate datasets
Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853.
I think the order is important, users won't expect the dataset to be "shuffled" when they add a new item |
https://github.com/huggingface/datasets/pull/2151 | Add support for axis in concatenate datasets | > I think the order is important, users won't expect the dataset to be "shuffled" when they add a new item
OK, therefore I leave `_consolidate_blocks` as it is, which currently keeps the order of the blocks (no shuffling). | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | 39 | text: Add support for axis in concatenate datasets
Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853.
> I think the order is important, users won't expect the dataset to be "shuffled" when they add a new item
OK, therefore I leave `_consolidate_blocks` as it is, which currently keeps the order of the blocks (no shuffling). |
https://github.com/huggingface/datasets/pull/2151 | Add support for axis in concatenate datasets | Thank you guys for implementing this. Minor thing I noticed in the [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.concatenate_datasets): it says "Converts a list of Dataset with **the same schema** into a single Dataset". With the addition of the axis parameter, perhaps this should be reworded, no? | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | 41 | text: Add support for axis in concatenate datasets
Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853.
Thank you guys for implementing this. Minor thing I noticed in the [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.concatenate_datasets): it says "Converts a list of Dataset with **the same schema** into a single Dataset". With the addition of the axis parameter, perhaps this should be reworded, no? |
https://github.com/huggingface/datasets/pull/2145 | Implement Dataset add_column | #2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :) | Implement `Dataset.add_column`.
Close #1954. | 24 | text: Implement Dataset add_column
Implement `Dataset.add_column`.
Close #1954.
#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :) |
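A minimal sketch of the `add_column` API implemented in this PR (toy data):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})

# The new column must have the same length as the dataset.
ds = ds.add_column("label", [0, 1])
print(ds.column_names)  # ['text', 'label']
```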
https://github.com/huggingface/datasets/pull/2141 | added spans field for the wikiann datasets | Hi @lhoestq
Thanks a lot for taking the time to check it. I updated "dataset_infos.json" and added a description to the `_generate_samples` function in wikiann.py, but I was not sure about the format to write in the README. Thanks. | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | 36 | text: added spans field for the wikiann datasets
Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh
Hi @lhoestq
Thanks a lot for taking the time to check it. I updated "dataset_infos.json" and added a description to the `_generate_samples` function in wikiann.py, but I was not sure about the format to write in the README. Thanks. |
https://github.com/huggingface/datasets/pull/2141 | added spans field for the wikiann datasets | Thanks !
For the field descriptions in the dataset card, something like this does the job:
```
- `tokens`: a `list` of `string` features.
- `langs`: a `list` of `string` features that correspond to the language of each token.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
- `spans`: a `list` of `string` features, i.e. the list of named entities in the input text formatted as ``<TAG>: <mention>``
```
Also for information, I think the trailer of rick and morty season 5 is out now :) | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | 104 | text: added spans field for the wikiann datasets
Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh
Thanks !
For the field descriptions in the dataset card, something like this does the job:
```
- `tokens`: a `list` of `string` features.
- `langs`: a `list` of `string` features that correspond to the language of each token.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).
- `spans`: a `list` of `string` features, i.e. the list of named entities in the input text formatted as ``<TAG>: <mention>``
```
Also for information, I think the trailer of rick and morty season 5 is out now :) |
https://github.com/huggingface/datasets/pull/2141 | added spans field for the wikiann datasets | Hi @lhoestq
thank you! This is updated now, please feel free to let me know if I need to modify something :) thanks | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | 23 | text: added spans field for the wikiann datasets
Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh
Hi @lhoestq
thank you! This is updated now, please feel free to let me know if I need to modify something :) thanks |
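A sketch of what the added fields could look like in the loading script's `Features` (illustrative; the actual wikiann.py definition may differ):
```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features(
    {
        "tokens": Sequence(Value("string")),
        "langs": Sequence(Value("string")),
        "ner_tags": Sequence(
            ClassLabel(names=["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"])
        ),
        "spans": Sequence(Value("string")),  # e.g. "PER: John Smith"
    }
)
```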
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | Good start! Here are some proposed next steps:
- We want the class structure to reflect the template - so the parser knows what section titles to expect and when something has gone wrong
- As a result, we don't need to parse the table of contents, since it will always be the same
- For each section/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)
- `attributes` should probably be `text` | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 88 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
Good start! Here are some proposed next steps:
- We want the class structure to reflect the template - so the parser knows what section titles to expect and when something has gone wrong
- As a result, we don't need to parse the table of contents, since it will always be the same
- For each section/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)
- `attributes` should probably be `text` |
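A minimal sketch of the recursive section parsing and emptiness check described in this thread (illustrative only; the validator in the PR is more complete):
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    name: str
    text: str = ""
    subsections: List["Section"] = field(default_factory=list)

    def is_filled(self) -> bool:
        # A section counts as unfilled if it has no text or only the placeholder.
        content = self.text.strip()
        return content != "" and content != "[More Information Needed]"

def parse_sections(lines: List[str], level: int = 1) -> List[Section]:
    """Group markdown lines into Sections by heading level, recursively."""
    heading, deeper = "#" * level + " ", "#" * (level + 1) + " "
    sections: List[Section] = []
    i = 0
    while i < len(lines):
        if lines[i].startswith(heading):
            name = lines[i][len(heading):].strip()
            i += 1
            body: List[str] = []
            while i < len(lines) and not lines[i].startswith(heading):
                body.append(lines[i])
                i += 1
            # The section's own text is everything before its first subsection heading.
            split = next((j for j, l in enumerate(body) if l.startswith(deeper)), len(body))
            section = Section(name=name, text="\n".join(body[:split]).strip())
            section.subsections = parse_sections(body[split:], level + 1)
            sections.append(section)
        else:
            i += 1
    return sections
```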
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | @yjernite @lhoestq
I have added basic validation checking in the class. It works based on a YAML string. The YAML string determines the expected structure and which text is to be checked. The `text` flag can be true or false, showing whether the text has to be checked for emptiness or not. Similarly, each subsection is parsed recursively. I have used print statements for now so that all issues are shown.
Please let me know your thoughts.
I haven't added a variable that keeps track of whether the text is empty or not, but it can be done easily if required. | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 100 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
@yjernite @lhoestq
I have added basic validation checking in the class. It works based on a YAML string: the YAML string determines the expected structure and which text is to be checked. The `text` flag can be true or false, indicating whether the section's text has to be checked for emptiness. Similarly, each subsection is parsed recursively. I have used a print statement for now so that all issues are shown.
Please let me know your thoughts.
I haven't added a variable that keeps track of whether the text is empty or not, but it can be done easily if required. |
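A minimal, hypothetical sketch of the YAML-driven idea described in the comment above: a YAML string defines the expected section tree, and a per-section flag says whether its text must be non-empty. The field names here are illustrative, not the PR's actual format; the parsed sections follow the `name` / `attributes` / `subsections` shape shown in the `to_dict()` output.
```python
# Illustrative sketch only -- the real structure spec and validator may differ.
import yaml  # requires PyYAML

EXPECTED_STRUCTURE = yaml.safe_load("""
name: Dataset Card for X
text: false                 # top-level text may stay empty
subsections:
  - name: Dataset Description
    text: false
    subsections:
      - name: Dataset Summary
        text: true          # this section must contain some text
        subsections: []
""")

def check_section(section, expected, error_list):
    """Recursively compare a parsed section (name/attributes/subsections) to the spec."""
    if expected["text"] and not section["attributes"].strip():
        error_list.append(f"Expected some text for section `{section['name']}`.")
    expected_subs = {sub["name"]: sub for sub in expected["subsections"]}
    found = {sub["name"] for sub in section["subsections"]}
    for name in expected_subs:
        if name not in found:
            error_list.append(f"Section `{name}` is missing.")
    for sub in section["subsections"]:
        if sub["name"] in expected_subs:
            check_section(sub, expected_subs[sub["name"]], error_list)
```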
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | This looks like a good start !
Maybe we can use a field named `allow_empty` instead of `text` ?
Also +1 for keeping track of empty texts
Do you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?
Then we can create a `tests/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid ! | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 78 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
This looks like a good start !
Maybe we can use a field named `allow_empty` instead of `text` ?
Also +1 for keeping track of empty texts
Do you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?
Then we can create a `tests/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid ! |
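A rough sketch of what such a repo-wide test file could look like (hypothetical names and paths; the real entry point would be whatever the validator ends up exposing):
```python
# tests/test_dataset_cards.py -- rough sketch only, not the PR's actual test file.
from pathlib import Path

import pytest

# Placeholder for the real validation entry point (e.g. a ReadMe constructor that
# raises when a card is invalid); swap in the actual import once it exists.
def validate_readme(path: Path) -> None:
    raise NotImplementedError(f"plug the real validator in here for {path}")

README_PATHS = sorted(Path("./datasets").glob("*/README.md"))

@pytest.mark.parametrize("readme_path", README_PATHS, ids=lambda p: p.parent.name)
def test_dataset_card_is_valid(readme_path):
    # expected to raise a single error listing every failure if the card is invalid
    validate_readme(readme_path)
```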
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | Hi @lhoestq
I have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way. | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 31 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
Hi @lhoestq
I have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way. |
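A minimal sketch of the "collect everything, raise once" pattern described in the comment above (illustrative names; the actual exception type and message format may differ):
```python
class ReadMeValidationError(Exception):
    pass

def raise_if_invalid(error_list):
    """Raise a single error listing every collected failure."""
    if error_list:
        raise ReadMeValidationError(
            "The following issues were found:\n"
            + "\n".join(f"- {message}" for message in error_list)
        )

# usage: append messages while walking the README, then call this once at the end
errors = [
    "Expected some text for section `Dataset Summary`.",
    "Section `Languages` is missing.",
]
# raise_if_invalid(errors)  # would raise one error listing both issues
```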
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | Hi @lhoestq
I have added some basic tests and have also restructured the `ReadMe` class slightly.
There is one print statement currently, and I'm not sure how to remove it. Basically, I want to warn but not stop further validation. I can't append to a list because `error_list` and `warning_list` are both only present in the `validate` method, and this print is in the `parse` method. This happens when someone has repeated a section multiple times. For example:
```markdown
---
---
# Dataset Card for FashionMNIST
## Dataset Description
## Dataset Description
```
In this case, I check for validation only in the latest entry.
I can also raise an error (the ideal scenario), but it would still be in the `parse` method. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.
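A rough sketch of that instance-variable idea (illustrative names, not the actual `ReadMe` implementation): messages found while parsing are stored on the object instead of printed, so `validate` can report them together with the errors.
```python
# Illustrative only: warnings collected during parse() are surfaced by validate().
class ReadMeSketch:
    def __init__(self):
        self.error_lines = []
        self.warning_lines = []

    def parse(self, lines):
        seen = set()
        for line in lines:
            if line.startswith("#"):
                name = line.lstrip("#").strip()
                if name in seen:
                    # previously a print(); now recorded and reported later
                    self.warning_lines.append(
                        f"Section `{name}` is repeated; only the latest entry is validated."
                    )
                seen.add(name)

    def validate(self):
        # return (or raise) everything at once instead of stopping at the first issue
        return [f"ERROR: {e}" for e in self.error_lines] + [
            f"WARNING: {w}" for w in self.warning_lines
        ]
```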
In the tests, I'm using a dummy YAML string for the structure; we could also move it into a file, but I feel that is not a hard requirement. Let me know your thoughts.
I will add tests for `from_readme` as well.
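On checking the exact message when an error is raised (the question in the next paragraph): assuming pytest is the test runner here, `pytest.raises` accepts a `match` regular expression, and the captured `excinfo.value` holds the exact message. A minimal, hypothetical sketch:
```python
# Hypothetical test sketch, not the PR's actual tests.
import re
import pytest

EXPECTED = "Expected some text in section `Dataset Summary` but it is empty."

def fake_validator():
    # stands in for calling the real validator on an invalid card
    raise ValueError(EXPECTED)

def test_error_message_is_exact():
    with pytest.raises(ValueError, match=re.escape(EXPECTED)) as excinfo:
        fake_validator()
    # or compare the full message directly:
    assert str(excinfo.value) == EXPECTED
```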
However, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that. | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 218 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
Hi @lhoestq
I have added some basic tests, and I have also restructured the `ReadMe` class slightly.
There is one print statement currently, and I'm not sure how to remove it. Basically, I want to warn but not stop further validation. I can't append to a list because `error_list` and `warning_list` are both only present in the `validate` method, while this print is in the `parse` method. This happens when someone has repeated a section multiple times. For example:
```markdown
---
---
# Dataset Card for FashionMNIST
## Dataset Description
## Dataset Description
```
In this case, I check for validation only in the latest entry.
I can also raise an error (the ideal scenario), but again, that happens in `parse`. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.
In the tests, I'm using a dummy YAML string for the structure; we could also put it in a file, but I don't think that is a hard requirement. Let me know your thoughts.
I will add tests for `from_readme` as well.
However, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that. |
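For the question above about asserting on the exact error message in a test, one common option (a sketch, assuming `pytest` is the test runner and that the validator raises `ValueError`; the `validate` helper and its message below are made up for illustration) is `pytest.raises` with its `match` argument:

```python
import re

import pytest


def validate(readme_text: str):
    # Hypothetical stand-in for the ReadMe validation discussed above.
    raise ValueError("Expected some content in section `Dataset Summary` but it is empty.")


def test_validation_error_message():
    expected = "Expected some content in section `Dataset Summary` but it is empty."
    # `match` is treated as a regular expression, so escape it when the message
    # contains special characters such as backticks or parentheses.
    with pytest.raises(ValueError, match=re.escape(expected)):
        validate("dummy readme text")
```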
https://github.com/huggingface/datasets/pull/2121 | Add Validation For README | Hi @lhoestq
Thanks for merging. :)
Thanks a lot to you and @yjernite for guiding me and helping me out.
Yes, I'll also use the next PR for combining the readme and tags validation. ^_^ | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | 35 | text: Add Validation For README
Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan
Hi @lhoestq
Thanks for merging. :)
Thanks a lot to you and @yjernite for guiding me and helping me out.
Yes, I'll also use the next PR for combining the readme and tags validation. ^_^ |
https://github.com/huggingface/datasets/pull/2118 | Remove os.environ.copy in Dataset.map | I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.
Closing this one because #2119 has a much cleaner approach. | Replace `os.environ.copy` with in-place modification
Fixes #2115 | 31 | text: Remove os.environ.copy in Dataset.map
Replace `os.environ.copy` with in-place modification
Fixes #2115
I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.
Closing this one because #2119 has a much cleaner approach. |
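As a side note on the approach discussed above, a minimal sketch of in-place modification of `os.environ` with save-and-restore semantics (not the actual implementation from #2119; the variable name in the usage example is just an example) could look like this:

```python
import os
from contextlib import contextmanager


@contextmanager
def patched_environ(**overrides):
    # Set environment variables in place and restore them on exit, instead of
    # copying or deep-copying the whole ``os.environ`` mapping.
    saved = {key: os.environ.get(key) for key in overrides}
    os.environ.update({key: str(value) for key, value in overrides.items()})
    try:
        yield
    finally:
        for key, value in saved.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value


# Example usage
with patched_environ(HF_DATASETS_OFFLINE="1"):
    pass  # code that reads the variable runs here
```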
https://github.com/huggingface/datasets/pull/2114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | > Awesome thank you :)
> This is really cool
>
> I left a few comments.
>
> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. Can you only keep 2 lines instead ?
Hi @lhoestq, I did my best to improve the README files, while I also decreased dummy data examples. I included one more legal dataset. | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | 88 | text: Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR)
Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726)
> Awesome thank you :)
> This is really cool
>
> I left a few comments.
>
> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. Can you only keep 2 lines instead ?
Hi @lhoestq, I did my best to improve the README files, while I also decreased dummy data examples. I included one more legal dataset. |
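For the dummy-data size reduction mentioned above, a small helper along these lines (a sketch; the directory path in the commented call is hypothetical, and the trimmed files would still need to be re-zipped into `dummy_data.zip`) can truncate every `.jsonl` file to two lines:

```python
import pathlib


def trim_jsonl_dummy_data(dummy_dir: str, keep: int = 2) -> None:
    # Keep only the first `keep` lines of every .jsonl file under the
    # (already extracted) dummy data directory.
    for path in pathlib.Path(dummy_dir).rglob("*.jsonl"):
        lines = path.read_text(encoding="utf-8").splitlines(keepends=True)[:keep]
        path.write_text("".join(lines), encoding="utf-8")


# trim_jsonl_dummy_data("datasets/ecthr_cases/dummy/1.0.0/dummy_data")  # hypothetical path
```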
https://github.com/huggingface/datasets/pull/2114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | @lhoestq thanks for your review.
I shortened the examples in README files and removed `DEFAULT_CONFIG_BUILDER` from `eu_regulatory_ir.py`. | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | 17 | text: Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR)
Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726)
@lhoestq thanks for your review.
I shortened the examples in README files and removed `DEFAULT_CONFIG_BUILDER` from `eu_regulatory_ir.py`. |
https://github.com/huggingface/datasets/pull/2111 | Compute WER metric iteratively | I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.
By default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).
Some users might still want to use the old implementation. | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | 57 | text: Compute WER metric iteratively
Compute WER metric iteratively to avoid MemoryError.
Fix #2078.
I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.
By default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).
Some users might still want to use the old implementation. |
https://github.com/huggingface/datasets/pull/2111 | Compute WER metric iteratively | @lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`... | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | 17 | text: Compute WER metric iteratively
Compute WER metric iteratively to avoid MemoryError.
Fix #2078.
@lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`... |
https://github.com/huggingface/datasets/pull/2111 | Compute WER metric iteratively | Not sure about the name, if you can improve it feel free to do so ^^'
The old implementation computes the WER on the concatenation of all the input texts, while the new one computes the WER independently for each reference/prediction pair.
That's why I thought of `concatenate_texts` | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | 49 | text: Compute WER metric iteratively
Compute WER metric iteratively to avoid MemoryError.
Fix #2078.
Not sure about the name, if you can improve it feel free to do so ^^'
The old implementation computes the WER on the concatenation of all the input texts, while the new one computes the WER independently for each reference/prediction pair.
That's why I thought of `concatenate_texts` |
https://github.com/huggingface/datasets/pull/2111 | Compute WER metric iteratively | @lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.
From the end user perspective I think it might make more sense: how do you want to compute the metric?
- all at once, with more RAM needed?
- iteratively, with lower RAM requirements?
Because of that I was thinking of something like `iter` or `iterative`... | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | 64 | text: Compute WER metric iteratively
Compute WER metric iteratively to avoid MemoryError.
Fix #2078.
@lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.
From the end user perspective I think it might make more sense: how do you want to compute the metric?
- all at once, with more RAM needed?
- iteratively, with lower RAM requirements?
Because of that I was thinking of something like `iter` or `iterative`... |
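To make the trade-off concrete, here is a rough sketch of the per-pair accumulation idea using `jiwer` (an illustration only, not the metric's actual implementation; tokenization here is a plain whitespace split): absolute error counts are recovered from each pair's WER and summed, so memory stays flat and the quadratic alignment cost only applies to one sentence pair at a time.

```python
import jiwer


def corpus_wer_iterative(predictions, references):
    # Accumulate error counts pair by pair instead of concatenating all texts first.
    total_errors, total_words = 0.0, 0
    for pred, ref in zip(predictions, references):
        n_ref_words = len(ref.split())
        # jiwer.wer returns (S + D + I) / N for the pair, so multiplying by N
        # recovers the absolute number of errors for that pair.
        total_errors += jiwer.wer(ref, pred) * n_ref_words
        total_words += n_ref_words
    return total_errors / total_words


print(corpus_wer_iterative(["hello world"], ["hello there world"]))  # 1 deletion over 3 words
```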
https://github.com/huggingface/datasets/pull/2110 | Fix incorrect assertion in builder.py | Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`
So unfortunately we can't use this assertion you suggested | Fix incorrect num_examples comparison assertion in builder.py | 25 | text: Fix incorrect assertion in builder.py
Fix incorrect num_examples comparison assertion in builder.py
Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`
So unfortunately we can't use this assertion you suggested |
https://github.com/huggingface/datasets/pull/2110 | Fix incorrect assertion in builder.py | > Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`
> So unfortunately we can't use this assertion you suggested
Then it would be better to just remove the assertion, because the existing assertion does nothing. | Fix incorrect num_examples comparison assertion in builder.py | 43 | text: Fix incorrect assertion in builder.py
Fix incorrect num_examples comparison assertion in builder.py
> Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`
> So unfortunately we can't use this assertion you suggested
Then it would be better to just remove the assertion, because the existing assertion does nothing. |
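Given that a default `SplitInfo` reports `num_examples == 0`, one alternative to an unconditional assertion (purely illustrative; the function and names below are assumptions, not code from `builder.py`) is to compare sizes only when the expected value is actually known:

```python
def check_split_size(num_examples: int, expected_num_examples: int) -> None:
    # Only compare when the recorded split size is actually known;
    # a default/empty SplitInfo reports 0 examples.
    if expected_num_examples > 0 and num_examples != expected_num_examples:
        raise ValueError(
            f"Expected {expected_num_examples} examples but generated {num_examples}."
        )
```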
https://github.com/huggingface/datasets/pull/2109 | Add more issue templates and customize issue template chooser | If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussions for asking questions (instead of opening Issues).
I could also add some other templates: Bug, Feature Request,... | When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Donโt see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` template instead (this is more visible) for issues that indeed are not requesting the addition of a new dataset.
~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making easier for the users to choose it.~~
With this PR:
- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`
- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions | 35 | text: Add more issue templates and customize issue template chooser
When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Donโt see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` template instead (this is more visible) for issues that indeed are not requesting the addition of a new dataset.
~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making easier for the users to choose it.~~
With this PR:
- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`
- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions
If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussions for asking questions (instead of opening Issues).
I could also add some other templates: Bug, Feature Request,... |
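For reference, the issue template chooser is driven by `.github/ISSUE_TEMPLATE/config.yml`; a minimal sketch of such a file, written from Python so the YAML content is visible inline (the link name and wording are only examples, not necessarily what the PR uses), could be:

```python
from pathlib import Path

# Sketch of .github/ISSUE_TEMPLATE/config.yml: keep blank issues available and
# surface a contact link that sends general questions to GitHub Discussions.
CONFIG_YML = """\
blank_issues_enabled: true
contact_links:
  - name: Question / Discussion
    url: https://github.com/huggingface/datasets/discussions
    about: Please ask general questions on the Discussions page rather than opening an issue.
"""

Path(".github/ISSUE_TEMPLATE").mkdir(parents=True, exist_ok=True)
Path(".github/ISSUE_TEMPLATE/config.yml").write_text(CONFIG_YML, encoding="utf-8")
```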
https://github.com/huggingface/datasets/pull/2107 | Metadata validation | > Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.
I'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well where it is, if anything we could move it out of utils and into `datasets` as it could be used by e.g. `DatasetDict` so that users can pull the metadata easily rather than have to reparse the readme.
| - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | 93 | text: Metadata validation
- `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b)
> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.
I'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well where it is, if anything we could move it out of utils and into `datasets` as it could be used by e.g. `DatasetDict` so that users can pull the metadata easily rather than have to reparse the readme.
|
https://github.com/huggingface/datasets/pull/2107 | Metadata validation | Ok that makes sense if we want to have functions that parse the metadata for users | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | 16 | text: Metadata validation
- `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b)
Ok that makes sense if we want to have functions that parse the metadata for users |
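A very rough sketch of what a `pydantic`-based metadata model with "soft" task-id validation might look like (field names and the known-id list are assumptions for illustration, using the pydantic v1-style API; the real schema in this PR is more complete):

```python
from typing import Dict, List

from pydantic import BaseModel, validator

KNOWN_TASK_IDS = {"question-answering", "text-classification"}  # illustrative subset


class DatasetMetadata(BaseModel):
    # One list of tags per dataset config, keyed by config name.
    languages: Dict[str, List[str]]
    licenses: Dict[str, List[str]]
    task_ids: Dict[str, List[str]]

    @validator("task_ids")
    def soft_validate_task_ids(cls, value):
        # Soft validation: report unknown ids instead of raising, since the
        # task taxonomy is still expected to change.
        for config_name, ids in value.items():
            unknown = [task_id for task_id in ids if task_id not in KNOWN_TASK_IDS]
            if unknown:
                print(f"Unknown task ids for config '{config_name}': {unknown}")
        return value
```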
https://github.com/huggingface/datasets/pull/2107 | Metadata validation | Hi @theo-m @lhoestq
This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)
Sorry for the delay in responding.
Thanks,
Gunjan | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | 34 | text: Metadata validation
- `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b)
Hi @theo-m @lhoestq
This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)
Sorry for the delay in responding.
Thanks,
Gunjan |
https://github.com/huggingface/datasets/pull/2107 | Metadata validation | > Hi @theo-m @lhoestq
>
> This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)
>
> Sorry for the delay in responding.
>
> Thanks,
> Gunjan
Hi @gchhablani, yes I think at the moment the best solution is for you to write in `datasets-tagging`, as the PR will allow us to discuss and review, even though the work will be ported to this repo in the end.
Or we wait for this to be merged and you reopen the PR here, your call :) | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | 100 | text: Metadata validation
- `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b)
> Hi @theo-m @lhoestq
>
> This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)
>
> Sorry for the delay in responding.
>
> Thanks,
> Gunjan
Hi @gchhablani, yes I think at the moment the best solution is for you to write in `datasets-tagging`, as the PR will allow us to discuss and review, even though the work will be ported to this repo in the end.
Or we wait for this to be merged and you reopen the PR here, your call :) |
https://github.com/huggingface/datasets/pull/2101 | MIAM dataset - new citation details | Hi !
Looks like there's a unicode error in the new citation in the miam.py file.
Could you try to fix it ? Not sure which character it comes from though
You can test if it works on your side with
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam
``` | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | 47 | text: MIAM dataset - new citation details
Hi @lhoestq, I have updated the citations to reference an OpenReview preprint.
Hi !
Looks like there's a unicode error in the new citation in the miam.py file.
Could you try to fix it ? Not sure which character it comes from though
You can test if it works on your side with
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam
``` |
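To track down which character triggers the unicode error, a quick scan like the one below prints every non-ASCII character with its position (assuming the script lives at `datasets/miam/miam.py`, the usual layout in this repo):

```python
from pathlib import Path

# Print every non-ASCII character in the file together with its position,
# to spot the character that breaks the citation.
text = Path("datasets/miam/miam.py").read_text(encoding="utf-8")
for lineno, line in enumerate(text.splitlines(), start=1):
    for col, char in enumerate(line, start=1):
        if ord(char) > 127:
            print(f"line {lineno}, col {col}: {char!r} (U+{ord(char):04X})")
```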
https://github.com/huggingface/datasets/pull/2100 | Fix deprecated warning message and docstring | I have a question: what about `dictionary_encode_column_`?
- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.
- It is NOT deprecated in DatasetDict. | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | 32 | text: Fix deprecated warning message and docstring
Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning
I have a question: what about `dictionary_encode_column_`?
- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.
- It is NOT deprecated in DatasetDict. |
https://github.com/huggingface/datasets/pull/2100 | Fix deprecated warning message and docstring | `dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.
This has to be deprecated in `DatasetDict` as well.
And `Dataset.dictionary_encode_column` doesn't exist indeed. | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | 32 | text: Fix deprecated warning message and docstring
Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning
`dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.
This has to be deprecated in `DatasetDict` as well.
And `Dataset.dictionary_encode_column` doesn't exist indeed. |
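The combination described in this PR (a Sphinx `deprecated` directive in the docstring plus a `FutureWarning` at call time) could look roughly like the sketch below; the stub class, message wording and version number are illustrative, not the actual `datasets` source:

```python
import warnings


class Dataset:
    """Stub class, only to illustrate the deprecation pattern."""

    def dictionary_encode_column_(self, column: str):
        """Dictionary encode a column (sketch).

        .. deprecated:: 1.5
            This method never worked correctly and will be removed in a major release.
        """
        warnings.warn(
            "'dictionary_encode_column_' is deprecated and will be removed in a future major release.",
            FutureWarning,
            stacklevel=2,
        )
        # ... the original implementation would follow here ...
```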
https://github.com/huggingface/datasets/pull/2093 | Fix: Allows a feature to be named "_type" | Nice thank you !
This looks like a pretty simple yet effective fix ;)
Could you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?
```python
from datasets import Features, Value
# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.
# So we need the conversion of features to dict to work.
# You can test that using `dataclasses._asdict_inner`.
# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict
from dataclasses import _asdict_inner
f = Features({"_type": Value("string")})
reloaded_f = Features.from_dict(_asdict_inner(f, dict))
assert reloaded_f == f
``` | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | 126 | text: Fix: Allows a feature to be named "_type"
This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
Nice thank you !
This looks like a pretty simple yet effective fix ;)
Could you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?
```python
from datasets import Features, Value
# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.
# So we need the conversion of features to dict to work.
# You can test that using `dataclasses._asdict_inner`.
# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict
from dataclasses import _asdict_inner
f = Features({"_type": Value("string")})
reloaded_f = Features.from_dict(_asdict_inner(f, dict))
assert reloaded_f == f
``` |
https://github.com/huggingface/datasets/pull/2093 | Fix: Allows a feature to be named "_type" | Sure, i will add a test.
One question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue? | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | 35 | text: Fix: Allows a feature to be named "_type"
This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
Sure, i will add a test.
One question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue? |
https://github.com/huggingface/datasets/pull/2093 | Fix: Allows a feature to be named "_type" | The benchmark has a bit of noise, the values are fine ;)
especially in the change you did since the overhead added is negligible. | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | 24 | text: Fix: Allows a feature to be named "_type"
This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
The benchmark has a bit of noise, the values are fine ;)
especially in the change you did since the overhead added is negligible. |
https://github.com/huggingface/datasets/pull/2093 | Fix: Allows a feature to be named "_type" | Ok, i added the test you described above.
I avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR! | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | 36 | text: Fix: Allows a feature to be named "_type"
This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
Ok, i added the test you described above.
I avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR! |
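For context on why the `_type` name clashes, a check along the lines below (an assumption about the general idea, not necessarily the exact code of this PR) can tell a serialized feature type apart from a user column that merely happens to be called `_type`:

```python
def looks_like_serialized_feature(obj) -> bool:
    # A serialized feature type is a dict whose "_type" value is the *name* of a
    # feature class (a string such as "Value" or "ClassLabel"). A user column
    # called "_type" instead maps to a full feature definition (itself a dict),
    # so checking the value's type disambiguates the two cases.
    return isinstance(obj, dict) and isinstance(obj.get("_type"), str)


assert looks_like_serialized_feature({"_type": "Value", "dtype": "string"})
assert not looks_like_serialized_feature({"_type": {"_type": "Value", "dtype": "string"}})
```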
https://github.com/huggingface/datasets/pull/2087 | Update metadata if dataset features are modified | @lhoestq I'll try to add a test later if you think this approach with the wrapper is good. | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| 18 | text: Update metadata if dataset features are modified
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
@lhoestq I'll try to add a test later if you think this approach with the wrapper is good. |
https://github.com/huggingface/datasets/pull/2087 | Update metadata if dataset features are modified | @lhoestq Added a test. To verify that this change fixes the problem, replace:
```
!pip install datasets==1.5
```
with:
```
!pip install git+https://github.com/mariosasko/datasets-1.git@update-metadata
```
in the first cell of the notebook that is attached to the linked issue.
The CI failure is unrelated I think (building the docs locally doesn't throw an error). | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| 53 | text: Update metadata if dataset features are modified
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
@lhoestq Added a test. To verify that this change fixes the problem, replace:
```
!pip install datasets==1.5
```
with:
```
!pip install git+https://github.com/mariosasko/datasets-1.git@update-metadata
```
in the first cell of the notebook that is attached to the linked issue.
The CI failure is unrelated I think (building the docs locally doesn't throw an error). |
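Once the patched branch is installed, a quick sanity check looks something like the following; the dataset and column names here are just an example I picked, not the notebook from the issue:
```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="train[:100]")
# A transform that adds a column; the reported features should follow suit.
ds = ds.map(lambda ex: {"sentence_length": len(ex["sentence"])})
assert "sentence_length" in ds.features
print(ds.features["sentence_length"])  # e.g. Value(dtype='int64', id=None)
```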
https://github.com/huggingface/datasets/pull/2086 | change user permissions to -rw-r--r-- | I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. | Fix for #2065 | 27 | text: change user permissions to -rw-r--r--
Fix for #2065
I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. |
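For anyone who wants to reproduce that check, a small sketch (the file name is a placeholder, not a real cache file):
```python
import os
import stat

def file_mode(path):
    return stat.S_IMODE(os.stat(path).st_mode)

path = "example-train.arrow"   # placeholder, not an actual cache file
open(path, "w").close()
os.chmod(path, 0o644)          # -rw-r--r--
assert oct(file_mode(path)) == "0o644"
os.remove(path)
```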
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | > It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
We can also update the task lists here: https://github.com/huggingface/datasets-tagging/blob/main/task_set.json | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 53 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
We can also update the task lists here: https://github.com/huggingface/datasets-tagging/blob/main/task_set.json |
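For reference, the same bucketing as the script above can be written more compactly with `bisect` (same inclusive lower bounds; shown only as an illustration, not a proposed change to the script):
```python
import bisect

BOUNDS = [10**3, 10**4, 10**5, 10**6, 10**7, 10**8, 10**9, 10**10, 10**11, 10**12]
LABELS = ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M",
          "10M<n<100M", "100M<n<1B", "1B<n<10B", "10B<n<100B", "100B<n<1T", "n>1T"]

def size_category(total_examples: int) -> str:
    return LABELS[bisect.bisect_right(BOUNDS, total_examples)]

assert size_category(999) == "n<1K"
assert size_category(1_000) == "1K<n<10K"        # lower bound is inclusive
assert size_category(10**12) == "n>1T"
```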
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | Hi @lhoestq,
Thanks for approving.
How do I add the new categories to the tagging app? What I have added goes up to `1T`, not just `1M`.
I'll also check the task list :)
Thanks,
Gunjan | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 35 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
Hi @lhoestq,
Thanks for approving.
How do I add the new categories to the tagging app? What I have added goes up to `1T`, not just `1M`.
I'll also check the task list :)
Thanks,
Gunjan |
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | Hi @lhoestq,
I have made a PR for size categories on `datasets-tagging`
For tags, I have thought of adding more tags and categories, based on what I know about the existing datasets; any list will not be exhaustive because the contributors can be very specific or very general. Hence, there could be a continuous process of evaluating existing tags and adding more and more.
```json
{
"image-classification": {
"description": "image classification tasks",
"options": [
"multi-class-classification",
"multi-label-classification",
"other"
]
},
"conditional-text-generation": {
"description": "data-to-text and text transduction tasks such as translation or summarization",
"options": [
"machine-translation",
"sentence-splitting-fusion",
"extractive-and-abstractive-summarization",
"abstractive-summarization",
"extractive-summarization",
"multi-document-summarization",
"table-to-text",
"text-simplification",
"explanation-generation",
"stuctured-to-text",
"other"
]
},
"conditional-speech-generation": {
"description": "speech generation tasks",
"options": [
"text-to-speech",
"speech-translation",
"other"
]
  },
  "conditional-structure-generation": {
    "description": "text or speech to structured data",
    "options": [
      "knowledge-graph-mining",
      "code-generation"
    ]
},
"question-answering": {
"description": "question answering tasks",
"options": [
"open-domain-qa",
"closed-domain-qa",
"multiple-choice-qa",
"extractive-qa",
"abstractive-qa",
"conversational-qa",
"multi-document-qa",
"other"
]
},
"speech-classification": {
"description": "speech to label tasks",
"options": [
"other"
]
},
"sequence-modeling": {
"description": "such as language, speech or dialogue modeling",
"options": [
"dialogue-modeling",
"language-modeling",
"speech-modeling",
"multi-turn",
"slot-filling",
"other"
]
},
"speech-recognition": {
"description": "speech to text tasks",
"options": [
"automatic-speech-recognition",
"other"
]
},
"structure-prediction": {
"description": "predicting structural properties of the text, such as syntax",
"options": [
"coreference-resolution",
"named-entity-recognition",
"part-of-speech-tagging",
"parsing",
"sentence-segmentation",
"single-span-prediction",
"multi-span-prediction",
"clause-or-phrase-segmentation",
"dependency-parsing",
"constituency-parsing",
"other"
]
},
"text-classification": {
"description": "predicting a class index or boolean value",
"options": [
"acceptability-classification",
"entity-linking-classification",
"relation-extraction",
"common-sense-reasoning",
"fact-checking",
"intent-classification",
"multi-class-classification",
"multi-label-classification",
"natural-language-inference",
"semantic-similarity-classification",
"sentiment-classification",
"topic-classification",
"emotion-classification",
"token-classification",
"word-sense-disambiguation",
"offense-classification",
"hate-speech-classification",
"language-classification",
"bias-classification",
"other"
]
},
"text-retrieval": {
"description": "information or text retrieval tasks",
"options": [
"document-retrieval",
"utterance-retrieval",
"entity-linking-retrieval",
"fact-checking-retrieval",
"other"
]
},
"text-scoring": {
"description": "text scoring tasks, predicting a real valued score for some text",
"options": [
"semantic-similarity-scoring",
"sentiment-scoring",
"other"
]
},
"other": {
"description": "raw data or other task families",
"options": [
"data-mining",
"raw-text",
"raw-speech",
"raw-image",
"other"
]
}
}
```
I'll sort this when adding it to the .json. Also, I'll change categories according to this if this seems okay to you and commit it to this PR.
I'll also fix spelling errors, and map some categories which are only partially correct, e.g. `other-machine-translation`, to the correct tag.
Lastly, for the options we can also add a description to make it easier for users to understand what we mean by each option. For example, for "emotion-classification", we can explain what kinds of data we are talking about, or what we mean by "single-span-prediction", etc. | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 408 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
Hi @lhoestq,
I have made a PR for size categories on `datasets-tagging`
For tags, I have thought of adding more tags and categories, based on what I know about the existing datasets; any list will not be exhaustive because the contributors can be very specific or very general. Hence, there could be a continuous process of evaluating existing tags and adding more and more.
```json
{
"image-classification": {
"description": "image classification tasks",
"options": [
"multi-class-classification",
"multi-label-classification",
"other"
]
},
"conditional-text-generation": {
"description": "data-to-text and text transduction tasks such as translation or summarization",
"options": [
"machine-translation",
"sentence-splitting-fusion",
"extractive-and-abstractive-summarization",
"abstractive-summarization",
"extractive-summarization",
"multi-document-summarization",
"table-to-text",
"text-simplification",
"explanation-generation",
"stuctured-to-text",
"other"
]
},
"conditional-speech-generation": {
"description": "speech generation tasks",
"options": [
"text-to-speech",
"speech-translation",
"other"
]
  },
  "conditional-structure-generation": {
    "description": "text or speech to structured data",
    "options": [
      "knowledge-graph-mining",
      "code-generation"
    ]
},
"question-answering": {
"description": "question answering tasks",
"options": [
"open-domain-qa",
"closed-domain-qa",
"multiple-choice-qa",
"extractive-qa",
"abstractive-qa",
"conversational-qa",
"multi-document-qa",
"other"
]
},
"speech-classification": {
"description": "speech to label tasks",
"options": [
"other"
]
},
"sequence-modeling": {
"description": "such as language, speech or dialogue modeling",
"options": [
"dialogue-modeling",
"language-modeling",
"speech-modeling",
"multi-turn",
"slot-filling",
"other"
]
},
"speech-recognition": {
"description": "speech to text tasks",
"options": [
"automatic-speech-recognition",
"other"
]
},
"structure-prediction": {
"description": "predicting structural properties of the text, such as syntax",
"options": [
"coreference-resolution",
"named-entity-recognition",
"part-of-speech-tagging",
"parsing",
"sentence-segmentation",
"single-span-prediction",
"multi-span-prediction",
"clause-or-phrase-segmentation",
"dependency-parsing",
"constituency-parsing",
"other"
]
},
"text-classification": {
"description": "predicting a class index or boolean value",
"options": [
"acceptability-classification",
"entity-linking-classification",
"relation-extraction",
"common-sense-reasoning",
"fact-checking",
"intent-classification",
"multi-class-classification",
"multi-label-classification",
"natural-language-inference",
"semantic-similarity-classification",
"sentiment-classification",
"topic-classification",
"emotion-classification",
"token-classification",
"word-sense-disambiguation",
"offense-classification",
"hate-speech-classification",
"language-classification",
"bias-classification",
"other"
]
},
"text-retrieval": {
"description": "information or text retrieval tasks",
"options": [
"document-retrieval",
"utterance-retrieval",
"entity-linking-retrieval",
"fact-checking-retrieval",
"other"
]
},
"text-scoring": {
"description": "text scoring tasks, predicting a real valued score for some text",
"options": [
"semantic-similarity-scoring",
"sentiment-scoring",
"other"
]
},
"other": {
"description": "raw data or other task families",
"options": [
"data-mining",
"raw-text",
"raw-speech",
"raw-image",
"other"
]
}
}
```
I'll sort this when adding it to the .json. Also, I'll change categories according to this if this seems okay to you and commit it to this PR.
I'll also fix spelling errors, and map some categories which are only partially correct, e.g. `other-machine-translation`, to the correct tag.
Lastly, for the options we can also add a description to make it easier for users to understand what we mean by each option. For example, for "emotion-classification", we can explain what kinds of data we are talking about, or what we mean by "single-span-prediction", etc.
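To go with the proposal above, a tiny, hypothetical validator sketch for such a task set; it is not part of `datasets-tagging`, and the checks are just assumptions about what a well-formed entry should look like:
```python
import json

def validate_task_set(raw: str) -> dict:
    task_set = json.loads(raw)
    for name, spec in task_set.items():
        assert isinstance(spec.get("description"), str), f"{name}: missing description"
        options = spec.get("options")
        assert isinstance(options, list) and options, f"{name}: empty or missing options"
    return task_set

example = '{"text-scoring": {"description": "text scoring tasks", "options": ["semantic-similarity-scoring", "other"]}}'
print(sorted(validate_task_set(example)))  # ['text-scoring']
```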
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | Good idea, thank you! Can you open a PR on datasets-tagging for the tasks as well?
Also, you can update the dataset card with the new task categories in another PR if you don't mind | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 37 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
Good idea, thank you! Can you open a PR on datasets-tagging for the tasks as well?
Also, you can update the dataset card with the new task categories in another PR if you don't mind
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | We can merge this one once the PR on dataset sizes is merged on `datasets-tagging` ;) | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 16 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
We can merge this one once the PR on dataset sizes is merged on `datasets-tagging` ;) |
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | Hi @lhoestq,
One problem with this approach is that for datasets like `ccaligned_multilingual`, the infos won't be complete because we don't have all configs. In that case, people might face trouble finding the dataset using the tag, although they probably won't be checking the size tag for a dataset like that.
What do you think?
CC @theo-m | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 57 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
Hi @lhoestq,
One problem with this approach is that for datasets like `ccaligned_multilingual`, the infos won't be complete because we don't have all configs. In that case, people might face trouble finding the dataset using the tag, although they probably won't be checking the size tag for a dataset like that.
What do you think?
CC @theo-m |
https://github.com/huggingface/datasets/pull/2074 | Fix size categories in YAML Tags | For datasets like `ccaligned_multilingual` it's important to have all the tags for users to search and find it. Currently it has the full list of tags (without the config names). So you can actually find the dataset, but you don't know what tag corresponds to what configuration. | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | 47 | text: Fix size categories in YAML Tags
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
import json
import os

from omegaconf import OmegaConf

for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multilingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code; if there are more such datasets, then I'll ignore them too.
For datasets like `ccaligned_multilingual` it's important to have all the tags for users to search and find it. Currently it has the full list of tags (without the config names). So you can actually find the dataset, but you don't know which tag corresponds to which configuration. |
https://github.com/huggingface/datasets/pull/2072 | Fix docstring issues | I think I will stop pushing to this PR, so that it can be merged for today's release.
I will open another PR for further fixing docs.
Do you agree, @lhoestq ? | Fix docstring issues. | 32 | text: Fix docstring issues
Fix docstring issues.
I think I will stop pushing to this PR, so that it can be merged for today's release.
I will open another PR for further fixing docs.
Do you agree, @lhoestq ? |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken: my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate.
I'm not familiar with the caching you describe for `.map`, I'll look it up. | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 59 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken: my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate.
I'm not familiar with the caching you describe for `.map`, I'll look it up. |
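For context, a minimal sketch (not from the PR) of timing the filter with the on-disk cache disabled — `set_caching_enabled` is the same helper used in the tracemalloc script further down; the 1% split is an arbitrary choice to keep the run short:
```python
import time

from datasets import load_dataset, set_caching_enabled

# Disable the on-disk cache so every .filter() call is recomputed instead of
# being reloaded from a previously written cache file.
set_caching_enabled(False)

bc = load_dataset("bookcorpus", split="train[:1%]")

start = time.time()
filtered = bc.filter(lambda x: len(x["text"]) < 64)
print(f"kept {len(filtered)} of {len(bc)} rows in {time.time() - start:.1f}s")
```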
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem. | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 24 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem. |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | tracemalloc outputs from this script:
```python
import logging
import sys
import time
import tracemalloc
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
tracemalloc.start()
bc = load_dataset("bookcorpus")
now = time.time()
try:
snapshot1 = tracemalloc.take_snapshot()
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
except Exception as e:
print(f"cancelled: {e}")
exit(1)
snapshot2 = tracemalloc.take_snapshot()
tracemalloc.stop()
elapsed = time.time() - now
print(elapsed)
top_stats = snapshot2.compare_to(snapshot1, "lineno")
print("[ Top 10 differences ]")
for stat in top_stats[:10]:
print(stat)
```
This branch:
```
ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1" 200 0
WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)
0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
100%|██████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]
815.6356580257416
[ Top 10 differences ]
<frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B
<frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B
<frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B
```
On master:
```
ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1" 200 0
WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)
0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
100%|██████████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]
3738.870892047882
[ Top 10 differences ]
<frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B
<frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B
<frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 KiB (+489 KiB), count=9569 (+4535), average=107 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B
```
I'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 518 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
tracemalloc outputs from this script:
```python
import logging
import sys
import time
import tracemalloc
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
tracemalloc.start()
bc = load_dataset("bookcorpus")
now = time.time()
try:
snapshot1 = tracemalloc.take_snapshot()
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
except Exception as e:
print(f"cancelled: {e}")
exit(1)
snapshot2 = tracemalloc.take_snapshot()
tracemalloc.stop()
elapsed = time.time() - now
print(elapsed)
top_stats = snapshot2.compare_to(snapshot1, "lineno")
print("[ Top 10 differences ]")
for stat in top_stats[:10]:
print(stat)
```
This branch:
```
ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1" 200 0
WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)
0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
100%|██████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]
815.6356580257416
[ Top 10 differences ]
<frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B
<frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B
<frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B
```
On master:
```
ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1" 200 0
WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)
0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
100%|██████████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]
3738.870892047882
[ Top 10 differences ]
<frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B
<frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B
<frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 KiB (+489 KiB), count=9569 (+4535), average=107 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B
```
I'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).
What's the length of the resulting dataset ?
You can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 52 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).
What's the length of the resulting dataset ?
You can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow |
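A minimal sketch of the measurement being suggested here, assuming the `pa.total_allocated_bytes()` helper (the one the later comments in this thread actually use); the 1% split is arbitrary:
```python
import pyarrow as pa
from datasets import load_dataset

bc = load_dataset("bookcorpus", split="train[:1%]")

before = pa.total_allocated_bytes()            # bytes currently held in Arrow buffers
bc = bc.filter(lambda x: len(x["text"]) < 64)  # keep a reference so the result isn't collected
after = pa.total_allocated_bytes()

print(f"Arrow memory grew by {(after - before) / 1024 ** 2:.1f} MiB")
```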
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | ```diff
diff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py
index 4b9efd4e..a862c204 100644
--- a/benchmarks/benchmark_filter.py
+++ b/benchmarks/benchmark_filter.py
@@ -1,6 +1,9 @@
import logging
import sys
import time
+import tracemalloc
+
+import pyarrow as pa
from datasets import load_dataset, set_caching_enabled
@@ -9,13 +12,28 @@ if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
+ tracemalloc.start()
bc = load_dataset("bookcorpus")
now = time.time()
try:
+ snapshot1 = tracemalloc.take_snapshot()
+ pamem1 = pa.total_allocated_bytes()
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
+ pamem2 = pa.total_allocated_bytes()
+ snapshot2 = tracemalloc.take_snapshot()
except Exception as e:
print(f"cancelled: {e}")
+ exit(1)
+ tracemalloc.stop()
elapsed = time.time() - now
print(elapsed)
+ top_stats = snapshot2.compare_to(snapshot1, "lineno")
+
+ print("[ Top 10 differences ]")
+ for stat in top_stats[:10]:
+ print(stat)
+
+ print("[ pyarrow reporting ]")
+ print(f"before: ({pamem1}) after: ({pamem2})")
```
this yields 0-0, does not seem like a good tool, and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html) | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 139 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
```diff
diff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py
index 4b9efd4e..a862c204 100644
--- a/benchmarks/benchmark_filter.py
+++ b/benchmarks/benchmark_filter.py
@@ -1,6 +1,9 @@
import logging
import sys
import time
+import tracemalloc
+
+import pyarrow as pa
from datasets import load_dataset, set_caching_enabled
@@ -9,13 +12,28 @@ if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
+ tracemalloc.start()
bc = load_dataset("bookcorpus")
now = time.time()
try:
+ snapshot1 = tracemalloc.take_snapshot()
+ pamem1 = pa.total_allocated_bytes()
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
+ pamem2 = pa.total_allocated_bytes()
+ snapshot2 = tracemalloc.take_snapshot()
except Exception as e:
print(f"cancelled: {e}")
+ exit(1)
+ tracemalloc.stop()
elapsed = time.time() - now
print(elapsed)
+ top_stats = snapshot2.compare_to(snapshot1, "lineno")
+
+ print("[ Top 10 differences ]")
+ for stat in top_stats[:10]:
+ print(stat)
+
+ print("[ pyarrow reporting ]")
+ print(f"before: ({pamem1}) after: ({pamem2})")
```
this yields 0-0, does not seem like a good tool, and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html) |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | Personally if I use your script to benchmark on this branch
```python
bc = load_dataset("bookcorpus", split="train[:1%]")
bc = bc.filter(lambda x: len(x["text"]) < 64)
```
then I get
```
[ pyarrow reporting ]
before: (0) after: (15300672)
```
Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do
```python
bc["train"] = bc["train"].filter(...)
```
Can you try again on your side just to make sure ?
Even if the documentation doesn't say much, `pa.total_allocated_bytes` is pretty useful, and also very consistent.
It tracks the number of bytes used for arrow data. | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 95 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
Personally if I use your script to benchmark on this branch
```python
bc = load_dataset("bookcorpus", split="train[:1%]")
bc = bc.filter(lambda x: len(x["text"]) < 64)
```
then I get
```
[ pyarrow reporting ]
before: (0) after: (15300672)
```
Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do
```python
bc["train"] = bc["train"].filter(...)
```
Can you try again on your side just to make sure ?
Even if the documentation doesn't say much, `pa.total_allocated_bytes` is pretty useful, and also very consistent.
It tracks the number of bytes used for arrow data. |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | > Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do
>
> ```python
> bc["train"] = bc["train"].filter(...)
> ```
Nice catch! I get 1.74GB for this branch | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 34 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do
>
> ```python
> bc["train"] = bc["train"].filter(...)
> ```
Nice catch! I get 1.74GB for this branch |
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | Looks like we may need to write the filtered table on the disk then.
The other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 54 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
Looks like we may need to write the filtered table on the disk then.
The other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon |
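Purely illustrative sketch of the second option mentioned above (slice the kept rows and concatenate them), not the implementation that ended up in the PR:
```python
import pyarrow as pa

def filter_by_slicing(table: pa.Table, keep: list) -> pa.Table:
    # Collect contiguous runs of rows to keep, slice them out, then concatenate.
    runs, start = [], None
    for i, flag in enumerate(keep):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(keep) - start))
    slices = [table.slice(offset, length) for offset, length in runs]
    return pa.concat_tables(slices) if slices else table.slice(0, 0)

t = pa.table({"text": ["short", "a much longer sentence", "tiny"]})
print(filter_by_slicing(t, [True, False, True]).to_pydict())
# {'text': ['short', 'tiny']}
```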
https://github.com/huggingface/datasets/pull/2060 | Filtering refactor | From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E) | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | 23 | text: Filtering refactor
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E) |
https://github.com/huggingface/datasets/pull/2053 | Add bAbI QA tasks | Hi @lhoestq,
Should I remove the 160 configurations? Is it too much?
EDIT:
Can you also check the task category? I'm not sure if there is an appropriate tag for the same. | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| 32 | text: Add bAbI QA tasks
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
Hi @lhoestq,
Should I remove the 160 configurations? Is it too much?
EDIT:
Can you also check the task category? I'm not sure if there is an appropriate tag for the same. |
https://github.com/huggingface/datasets/pull/2053 | Add bAbI QA tasks | Thanks for the changes !
> Should I remove the 160 configurations? Is it too much?
Yea 160 configuration is a lot.
Maybe this dataset can work with parameters `type` and `task_no` ?
You can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.
Also feel free to add an example in the dataset card of how to load the other configurations
```
load_dataset("babi_qa", type="hn", task_no="qa1")
```
for example, and with a list of the possible combinations.
> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.
It looks appropriate, thanks :) | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| 105 | text: Add bAbI QA tasks
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
Thanks for the changes !
> Should I remove the 160 configurations? Is it too much?
Yea 160 configuration is a lot.
Maybe this dataset can work with parameters `type` and `task_no` ?
You can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.
Also feel free to add an example in the dataset card of how to load the other configurations
```
load_dataset("babi_qa", type="hn", task_no="qa1")
```
for example, and with a list of the possible combinations.
> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.
It looks appropriate, thanks :) |
https://github.com/huggingface/datasets/pull/2053 | Add bAbI QA tasks | Hi @lhoestq
I'm unable to test it locally using:
```python
load_dataset("datasets/babi_qa", type="hn", task_no="qa1")
```
It raises an error:
```python
TypeError: __init__() got an unexpected keyword argument 'type'
```
Will this be possible only after merging? Or am I missing something here? | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| 41 | text: Add bAbI QA tasks
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
Hi @lhoestq
I'm unable to test it locally using:
```python
load_dataset("datasets/babi_qa", type="hn", task_no="qa1")
```
It raises an error:
```python
TypeError: __init__() got an unexpected keyword argument 'type'
```
Will this be possible only after merging? Or am I missing something here? |
https://github.com/huggingface/datasets/pull/2053 | Add bAbI QA tasks | Can you try adding this class attribute to `BabiQa` ?
```python
BUILDER_CONFIG_CLASS = BabiQaConfig
```
This should fix the TypeError issue you got | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| 23 | text: Add bAbI QA tasks
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
Can you try adding this class attribute to `BabiQa` ?
```python
BUILDER_CONFIG_CLASS = BabiQaConfig
```
This should fix the TypeError issue you got |
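A rough sketch of the pattern discussed in this thread — a parametrized config plus the `BUILDER_CONFIG_CLASS` attribute; the names and defaults below are illustrative, not the actual dataset script:
```python
import datasets

class BabiQaConfig(datasets.BuilderConfig):
    """Config parametrized by `type` and `task_no`, so that
    load_dataset("babi_qa", type="hn", task_no="qa1") picks the variant at
    load time instead of needing 160 pre-registered configurations."""

    def __init__(self, *, type=None, task_no=None, **kwargs):
        super().__init__(**kwargs)
        self.type = type
        self.task_no = task_no

class BabiQa(datasets.GeneratorBasedBuilder):
    # Without this class attribute, extra keyword arguments such as `type`
    # are forwarded to the default BuilderConfig and raise a TypeError.
    BUILDER_CONFIG_CLASS = BabiQaConfig

    BUILDER_CONFIGS = [
        BabiQaConfig(name="en-qa1", type="en", task_no="qa1"),
    ]

    # _info(), _split_generators() and _generate_examples() would follow as usual.
```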
https://github.com/huggingface/datasets/pull/2053 | Add bAbI QA tasks | Hi @lhoestq
I have added the changes. Only the "qa1" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.
Thanks,
Gunjan | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| 46 | text: Add bAbI QA tasks
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
Hi @lhoestq
I have added the changes. Only the "qa1" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.
Thanks,
Gunjan |
https://github.com/huggingface/datasets/pull/2047 | Multilingual dIalogAct benchMark (miam) | Once the review period is over, feel free to open a PR to add all the missing information ;) | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over. | 19 | text: Multilingual dIalogAct benchMark (miam)
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
Once the review period is over, feel free to open a PR to add all the missing information ;) |
https://github.com/huggingface/datasets/pull/2047 | Multilingual dIalogAct benchMark (miam) | Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include. | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over. | 21 | text: Multilingual dIalogAct benchMark (miam)
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include. |
https://github.com/huggingface/datasets/pull/2045 | Preserve column ordering in Dataset.rename_column | I don't know how to trigger it manually, but an empty commit should do the job | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this. | 16 | text: Preserve column ordering in Dataset.rename_column
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this.
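For reference, a sketch of the behavior this PR is expected to produce for the same example (column order preserved after renaming):
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d.rename_column('sentences', 'text')
Dataset({
    features: ['text', 'label'],
    num_rows: 2
})
```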
I don't know how to trigger it manually, but an empty commit should do the job |
https://github.com/huggingface/datasets/pull/2043 | Support pickle protocol for dataset splits defined as ReadInstruction | @lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading. | Fixes #2022 (+ some style fixes) | 24 | text: Support pickle protocol for dataset splits defined as ReadInstruction
Fixes #2022 (+ some style fixes)
@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading. |
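As a hedged sketch of the scenario discussed here (the dataset name and slice values are only illustrative), a split defined as a `ReadInstruction` should survive a pickle round trip instead of being coerced to a `NamedSplit`:
```python
import pickle
from datasets import load_dataset, ReadInstruction

ri = ReadInstruction("train", from_=0, to=10, unit="%")  # illustrative slice
ds = load_dataset("squad", split=ri)                     # illustrative dataset
ds2 = pickle.loads(pickle.dumps(ds))
print(type(ds2.split))  # expected to still be a ReadInstruction after reloading
```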
https://github.com/huggingface/datasets/pull/2037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required some pip packages, so I installed them.
test results on origin/master and my branch are the same. I think it's not related to my modification, is it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do? | 27 | text: Fix: Wikipedia - save memory by replacing root.clear with elem.clear
see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required some pip packages, so I installed them.
test results on origin/master and my branch are the same. I think it's not related to my modification, is it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do?
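A minimal sketch of the memory-saving pattern described above (not the actual `wikipedia.py` code; the tag handling is illustrative): clear each element once it has been processed instead of keeping a reference to the whole tree via the root.
```python
import xml.etree.ElementTree as ET

def iter_page_titles(xml_path):
    """Yield page titles from a MediaWiki XML dump without building the full tree."""
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            title = elem.findtext("./{*}title")  # namespace-agnostic lookup (Python 3.8+)
            if title is not None:
                yield title
            elem.clear()  # free the finished <page> subtree instead of calling root.clear()
```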
The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it |
https://github.com/huggingface/datasets/pull/2030 | Implement Dataset from text | I am wondering why only one test of "keep_in_memory=True" fails, when there are many other tests that test the same and it happens only in pyarrow_1... | Implement `Dataset.from_text`.
Analogue to #1943, #1946. | 26 | text: Implement Dataset from text
Implement `Dataset.from_text`.
Analogue to #1943, #1946.
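A hedged usage sketch (the file path is illustrative); by analogy with `Dataset.from_csv` and `Dataset.from_json`, each line of the file is expected to become one example in a `text` column:
```python
from datasets import Dataset

ds = Dataset.from_text("my_corpus.txt")  # illustrative path, one example per line
print(ds.column_names)                   # expected: ['text']
```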
I am wondering why only one test of "keep_in_memory=True" fails, when there are many other tests that test the same thing, and why it happens only in pyarrow_1...
https://github.com/huggingface/datasets/pull/2028 | Adding PersiNLU reading-comprehension | Thanks! @lhoestq Let me know if you want me to address anything to get this merged. | 16 | text: Adding PersiNLU reading-comprehension
Thanks! @lhoestq Let me know if you want me to address anything to get this merged. |
|
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?
P.S. One last thing:
Is there a way to flush out any connection to a data source loaded with the **load_from_disk** or **load_dataset** methods? At the moment I suspect that when we use either of those functions, a pointer to the source is always kept, even though we override it with a new version of the dataset source. This would be really useful in an iterative process.
| ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 134 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
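A hedged illustration of the three wrappers described above (the file path is illustrative and exact import paths/signatures may differ from the final API):
```python
import pyarrow as pa
from datasets.table import InMemoryTable, MemoryMappedTable, concat_tables

in_memory = InMemoryTable.from_pydict({"id": [1, 2, 3]})
on_disk = MemoryMappedTable.from_file("path/to/dataset.arrow")  # illustrative path
# Concatenating mixed sources yields a ConcatenationTable whose blocks keep their
# original in-memory / memory-mapped pickling behavior.
combined = concat_tables([in_memory, on_disk])
```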
There is one more thing I would love to see. Let's say we iteratively keep updating a data source that was loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file in order to save disk space. At the moment **save_to_disk** cannot assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?
P.S. One last thing:
Is there a way to flush out any connection to a data source loaded with the **load_from_disk** or **load_dataset** methods? At the moment I suspect that when we use either of those functions, a pointer to the source is always kept, even though we override it with a new version of the dataset source. This would be really useful in an iterative process.
|
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | > There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?
In the new save_to_disk, the filename of the arrow file is fixed: `dataset.arrow`.
This way it will be overwritten if you save your dataset again
> Is there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process.
If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.
Does that answer the question ? How does this "pointer" behavior manifest exactly on your side ? | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 196 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
> There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?
In the new save_to_disk, the filename of the arrow file is fixed: `dataset.arrow`.
This way it will be overwritten if you save your dataset again
> Is there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process.
If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.
Does that answer the question ? How does this "pointer" behavior manifest exactly on your side ? |
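A hedged sketch of the save/reload cycle discussed in this exchange (directory names are illustrative; saving to a fresh directory avoids writing over files that are currently memory-mapped):
```python
from datasets import load_from_disk

ds = load_from_disk("my_dataset")         # memory-mapped from my_dataset/dataset.arrow
ds = ds.map(lambda example: example)      # some iterative update
ds.save_to_disk("my_dataset_v2")          # the arrow filename inside the target dir is fixed
ds = load_from_disk("my_dataset_v2")      # reload to see the updated data
```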
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | Apparently the usage of the compute layer of pyarrow requires pyarrow>=1.0.0 (otherwise there are some issues on windows with file permissions when doing dataset concatenation).
I'll bump the pyarrow requirement from 0.17.1 to 1.0.0 | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 34 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
Apparently the usage of the compute layer of pyarrow requires pyarrow>=1.0.0 (otherwise there are some issues on windows with file permissions when doing dataset concatenation).
I'll bump the pyarrow requirement from 0.17.1 to 1.0.0
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset |
> If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.
> Does that answer the question? How does this "pointer" behavior manifest exactly on your side?
Yes, I checked this behavior.. if we update the .arrow file it kind of flushes out the previous one. So your solution is perfect <3. | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 64 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
> If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.
> Does that answer the question? How does this "pointer" behavior manifest exactly on your side?
Yes, I checked this behavior.. if we update the .arrow file it kind of flushes out the previous one. So your solution is perfect <3. |
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | Sorry for spamming, there's a a bug that only happens on the CI so I have to re-run it several times | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 21 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
Sorry for spamming, there's a bug that only happens on the CI so I have to re-run it several times
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | Alright I finally added all the tests I wanted !
I also fixed all the bugs and now all the tests are passing :)
Let me know if you have comments.
I also noticed that two methods in pyarrow seem to bring some data in memory even for a memory mapped table: filter and cast:
- for filter I took a look at the C++ code on the arrow's side and found [this part](https://github.com/apache/arrow/blob/55c8d74d5556b25238fb2028e9fb97290ea24684/cpp/src/arrow/compute/kernels/vector_selection.cc#L93-L160) that "builds" the array during filter. It seems to indicate that it allocates new memory for the filtered array but not 100% sure.
- regarding cast I noticed that it happens when changing the precision of an array of integers. Not sure if there are other cases.
Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory. | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 146 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
Alright I finally added all the tests I wanted !
I also fixed all the bugs and now all the tests are passing :)
Let me know if you have comments.
I also noticed that two methods in pyarrow seem to bring some data in memory even for a memory mapped table: filter and cast:
- for filter I took a look at the C++ code on the arrow's side and found [this part](https://github.com/apache/arrow/blob/55c8d74d5556b25238fb2028e9fb97290ea24684/cpp/src/arrow/compute/kernels/vector_selection.cc#L93-L160) that "builds" the array during filter. It seems to indicate that it allocates new memory for the filtered array but not 100% sure.
- regarding cast I noticed that it happens when changing the precision of an array of integers. Not sure if there are other cases.
Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m, since we don't want to fill the user's memory.
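A hedged way to observe the allocation behavior described above (sizes and column names are illustrative):
```python
import pyarrow as pa

table = pa.table({"x": list(range(1_000_000))})
mask = pa.array([i % 2 == 0 for i in range(1_000_000)])

before = pa.total_allocated_bytes()
filtered = table.filter(mask)
print(pa.total_allocated_bytes() - before)  # > 0 suggests filter materialized new buffers
```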
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | > Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.
I'm a bit unclear on this: I thought the point of the refactor was to use `Table.filter` to speed up our own `.filter` and stop using `.map`, which offloaded too much stuff to disk.
At some point I recall we decided to use `keep_in_memory=True` as the expectations were that it would be hard to fill the memory? | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 83 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
> Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.
I'm a bit unclear on this: I thought the point of the refactor was to use `Table.filter` to speed up our own `.filter` and stop using `.map`, which offloaded too much stuff to disk.
At some point I recall we decided to use `keep_in_memory=True` as the expectations were that it would be hard to fill the memory? |
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | > I'm a bit unclear on this, I thought the point of the refactor was to use Table.filter to speed up our own .filter and stop using .map that offloaded too much stuff on disk.
> At some point I recall we decided to use keep_in_memory=True as the expectations were that it would be hard to fill the memory?
Yes it's ok to have the mask in memory, but not the full table. I was not aware that the table returned by filter could actually be in memory (it's not part of the pyarrow documentation afaik).
To be more specific I noticed that every time you call `filter`, the pyarrow total allocated memory increases.
I haven't checked on a big dataset though, but it would be nice to see how much memory it uses with respect to the size of the dataset. | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 142 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
> I'm a bit unclear on this, I thought the point of the refactor was to use Table.filter to speed up our own .filter and stop using .map that offloaded too much stuff on disk.
> At some point I recall we decided to use keep_in_memory=True as the expectations were that it would be hard to fill the memory?
Yes it's ok to have the mask in memory, but not the full table. I was not aware that the table returned by filter could actually be in memory (it's not part of the pyarrow documentation afaik).
To be more specific I noticed that every time you call `filter`, the pyarrow total allocated memory increases.
I haven't checked on a big dataset though, but it would be nice to see how much memory it uses with respect to the size of the dataset. |
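Similarly, a hedged check of the cast case mentioned above (changing integer precision; values are illustrative):
```python
import pyarrow as pa

table = pa.table({"x": pa.array(range(1_000_000), type=pa.int64())})
before = pa.total_allocated_bytes()
downcast = table.cast(pa.schema([("x", pa.int32())]))
print(pa.total_allocated_bytes() - before)  # > 0 would indicate new buffers for the int32 column
```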
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | I totally agree with you. I would have loved to use inheritance instead.
However, because `pa.Table` is a Cython class without proper initialization methods (you can't call `__init__`, for example), you can't instantiate a subclass of `pa.Table` in Python.
To be more specific, you actually can try to instantiate a subclass of `pa.Table` with no data BUT this is not a valid table so you get an error.
And since `pa.Table` objects are immutable you can't even set the data in `__new__` or `__init__`.
EDIT: one could make a new cython class that inherits from `pa.Table` with proper initialization methods, so that we can inherit from this class instead in python. We can do that in the future if we plan to use cython in `datasets`.
(see: https://arrow.apache.org/docs/python/extending.html) | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 128 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
I totally agree with you. I would have loved to use inheritance instead.
However, because `pa.Table` is a Cython class without proper initialization methods (you can't call `__init__`, for example), you can't instantiate a subclass of `pa.Table` in Python.
To be more specific, you actually can try to instantiate a subclass of `pa.Table` with no data, BUT this is not a valid table so you get an error.
And since `pa.Table` objects are immutable, you can't even set the data in `__new__` or `__init__`.
EDIT: one could make a new Cython class that inherits from `pa.Table` with proper initialization methods, so that we can inherit from this class instead in Python. We can do that in the future if we plan to use Cython in `datasets`.
(see: https://arrow.apache.org/docs/python/extending.html) |
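For readers skimming the thread, here is a minimal sketch of the composition approach discussed in the comment above: instead of subclassing `pa.Table`, wrap it and delegate attribute access to the wrapped table. The class and attribute names are illustrative only and are not the actual `datasets.table` implementation.
```python
import pyarrow as pa


class TableWrapper:
    """Illustrative wrapper: hold a pa.Table and delegate to it (composition, not inheritance)."""

    def __init__(self, table: pa.Table):
        self.table = table  # the wrapped pyarrow table

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails on the wrapper,
        # so every pa.Table attribute and method stays reachable.
        return getattr(self.table, name)

    def __len__(self):
        return self.table.num_rows


wrapped = TableWrapper(pa.table({"text": ["a", "b", "c"]}))
print(wrapped.num_rows)      # delegated to the underlying pa.Table -> 3
print(wrapped.column_names)  # ['text']
```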
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | @lhoestq, but in which cases you would like to instantiate directly either `InMemoryTable` or `MemoryMappedTable`? You normally use one of their `from_xxx` class methods... | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 24 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
@lhoestq, but in which cases you would like to instantiate directly either `InMemoryTable` or `MemoryMappedTable`? You normally use one of their `from_xxx` class methods... |
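As a side note, the "blocks" idea that the PR body attributes to ConcatenationTable can be approximated with plain pyarrow: concatenating an in-memory table with a memory-mapped one is cheap because the chunks are chained rather than copied. The snippet below is only a sketch with made-up file names, not the real ConcatenationTable.
```python
import os
import tempfile

import pyarrow as pa
import pyarrow.feather as feather

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "block0.feather")

# one block lives on disk and is memory-mapped when read back ...
feather.write_feather(pa.table({"text": ["on", "disk"]}), path, compression="uncompressed")
mapped_block = feather.read_table(path, memory_map=True)

# ... the other block lives in memory
memory_block = pa.table({"text": ["in", "memory"]})

# concat_tables chains the chunks of both blocks without copying the data
combined = pa.concat_tables([mapped_block, memory_block])
print(combined.num_rows)     # 4
print(combined.to_pydict())  # {'text': ['on', 'disk', 'in', 'memory']}
```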
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | Yes I was thinking of these cases. The issue is that they return `pa.Table` objects even from a subclass of `pa.Table` | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 21 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
Yes I was thinking of these cases. The issue is that they return `pa.Table` objects even from a subclass of `pa.Table` |
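To make the "replay" mechanism mentioned in the implementation details more concrete, here is a rough sketch of the idea: record every transformation applied to a memory-mapped table and re-apply it after reloading the file from disk. Class and method names are made up for illustration and do not match the real MemoryMappedTable API.
```python
import os
import tempfile

import pyarrow as pa
import pyarrow.feather as feather


class ReplayedTable:
    """Illustrative memory-mapped table that can replay its transformations after a reload."""

    def __init__(self, path: str):
        self.path = path
        self.replays = []  # recorded (method_name, args) pairs
        self.table = feather.read_table(path, memory_map=True)

    def _apply(self, name, *args):
        self.replays.append((name, args))
        self.table = getattr(self.table, name)(*args)
        return self

    def slice(self, offset=0, length=None):
        return self._apply("slice", offset, length)

    def rename_columns(self, names):
        return self._apply("rename_columns", names)

    def reload(self):
        # Re-open the file from disk, then re-apply the recorded changes in order.
        self.table = feather.read_table(self.path, memory_map=True)
        for name, args in self.replays:
            self.table = getattr(self.table, name)(*args)
        return self


path = os.path.join(tempfile.mkdtemp(), "demo.feather")
feather.write_feather(pa.table({"idx": list(range(10))}), path, compression="uncompressed")

t = ReplayedTable(path).slice(2, 5).rename_columns(["i"])
print(t.table.to_pydict())           # {'i': [2, 3, 4, 5, 6]}
print(t.reload().table.to_pydict())  # identical after reloading from disk
```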
https://github.com/huggingface/datasets/pull/2025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | I guess that in this case, the best approach is as you did, using composition over inheritance...
https://github.com/apache/arrow/pull/5322 | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | 18 | text: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
I guess that in this case, the best approach is as you did, using composition over inheritance...
https://github.com/apache/arrow/pull/5322 |
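The two pickling behaviours described at the top of the PR body (pickle the data for in-memory tables, pickle only the file path for memory-mapped ones) can be sketched with `__reduce__` as below. Again, the class names are hypothetical stand-ins, not the real InMemoryTable / MemoryMappedTable.
```python
import os
import pickle
import tempfile

import pyarrow as pa
import pyarrow.feather as feather


class InMemorySketch:
    def __init__(self, table: pa.Table):
        self.table = table

    def __reduce__(self):
        # The table data itself goes through pickle.
        return (InMemorySketch, (self.table,))


class MemoryMappedSketch:
    def __init__(self, path: str):
        self.path = path
        self.table = feather.read_table(path, memory_map=True)

    def __reduce__(self):
        # Only the path is pickled; unpickling re-opens the file on disk.
        return (MemoryMappedSketch, (self.path,))


path = os.path.join(tempfile.mkdtemp(), "sketch.feather")
feather.write_feather(pa.table({"a": list(range(100_000))}), path, compression="uncompressed")

mapped_bytes = pickle.dumps(MemoryMappedSketch(path))
in_memory_bytes = pickle.dumps(InMemorySketch(pa.table({"a": list(range(100_000))})))
print(len(mapped_bytes), "<", len(in_memory_bytes))  # the memory-mapped pickle stays tiny
```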
https://github.com/huggingface/datasets/pull/2023 | Add Romanian to XQuAD | Hi ! Thanks for updating XQUAD :)
The slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.
Could you please generate the dummy data with
```
datasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data
```
This will update all the dummy data files, and also add the new one for the romanian configuration.
You can also update the metadata with
```
datasets-cli test ./datasets/xquad --name xquad.ro --save_infos
```
This will update the dataset_infos.json file with the metadata of the romanian config :)
Thanks in advance ! | On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| 93 | text: Add Romanian to XQuAD
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
Hi ! Thanks for updating XQUAD :)
The slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.
Could you please generate the dummy data with
```
datasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data
```
This will update all the dummy data files, and also add the new one for the romanian configuration.
You can also update the metadata with
```
datasets-cli test ./datasets/xquad --name xquad.ro --save_infos
```
This will update the dataset_infos.json file with the metadata of the romanian config :)
Thanks in advance ! |
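After regenerating the dummy data and dataset_infos.json as described above, a quick way to sanity-check the new configuration is to load it directly. This sketch downloads the real data and assumes the `xquad.ro` config name used in this thread.
```python
from datasets import load_dataset

# XQuAD only ships a validation split; "xquad.ro" is the new Romanian config.
xquad_ro = load_dataset("xquad", "xquad.ro", split="validation")
print(xquad_ro)
print(xquad_ro[0]["question"])
```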
https://github.com/huggingface/datasets/pull/2023 | Add Romanian to XQuAD | Hello Quentin, and thanks for your help.
I found that running
```python
datasets-cli test ./datasets/xquad --name xquad.ro --save_infos
```
was not enough to pass the slow tests, because it was not adding the new `xquad.ro.json` checksum to the other configs' infos and because of that an `UnexpectedDownloadedFile` error was being thrown, so instead I used:
```python
datasets-cli test ./datasets/xquad --save_infos --all_configs --ignore_verifications
```
`--ignore_verifications` was necessary to bypass the same `UnexpectedDownloadedFile` error.
Additionally, I deleted `dummy_data_copy.zip` and the `copy.sh` script because they both now seem unnecessary.
The slow tests for both the real and dummy data now pass successfully, so I hope that I didn't mess anything up :)
| On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| 109 | text: Add Romanian to XQuAD
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
Hello Quentin, and thanks for your help.
I found that running
```python
datasets-cli test ./datasets/xquad --name xquad.ro --save_infos
```
was not enough to pass the slow tests, because it was not adding the new `xquad.ro.json` checksum to the other configs' infos and because of that an `UnexpectedDownloadedFile` error was being thrown, so instead I used:
```python
datasets-cli test ./datasets/xquad --save_infos --all_configs --ignore_verifications
```
`--ignore_verifications` was necessary to bypass the same `UnexpectedDownloadedFile` error.
Additionally, I deleted `dummy_data_copy.zip` and the `copy.sh` script because they both now seem unnecessary.
The slow tests for both the real and dummy data now pass successfully, so I hope that I didn't mess anything up :)
|