html_url | title | comments | body | comment_length_in_words | text |
---|---|---|---|---|---|
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | > > I'm not sure why I got an error like the one below when I auto-generated dummy data for "mrc"
> > ```
> > datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
> > Found duplicate Key: 0
> > Keys should be unique and deterministic in nature
> > ```
>
> Please check out the suggestion below. I think it might be the cause.
The problem was that the `id_` yielded in mrc was not unique (I used the index from `enumerate(paragraphs)` by mistake).
I fixed it and updated everything. | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 88 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
> > I'm not sure why I got an error like the one below when I auto-generated dummy data for "mrc"
> > ```
> > datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
> > Found duplicate Key: 0
> > Keys should be unique and deterministic in nature
> > ```
>
> Please check out the suggestion below. I think it might be the cause.
The problem was that the `id_` yielded in mrc was not unique (I used the index from `enumerate(paragraphs)` by mistake).
I fixed it and updated everything. |
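For readers unfamiliar with the error above, here is a minimal sketch of the kind of fix described: yield a key that is unique across the whole split instead of the index from `enumerate(paragraphs)`. This assumes a standard `datasets` `GeneratorBasedBuilder` and an illustrative SQuAD-like layout with a `guid` field; it is not the actual KLUE script.

```python
import json

def _generate_examples(self, filepath):
    """Yield (key, example) pairs whose keys are unique across the whole split."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for article in data["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                # Use an identifier that is unique in the raw data itself.
                # The index from enumerate(paragraphs) restarts at 0 for every
                # article, which is what triggers DuplicatedKeysError.
                yield qa["guid"], {
                    "context": paragraph["context"],
                    "question": qa["question"],
                    "answers": qa["answers"],
                }
```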
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | To fix the CI you can just merge master into your branch and it should be all green hopefully :) | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 20 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
To fix the CI you can just merge master into your branch and it should be all green hopefully :) |
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | @lhoestq
Thanks for reviewing!
It's harder than I thought to add a dataset card.
I checked and applied your suggestions (script, readme details, dummy data).
The dummy data is a little bit larger than expected because the `ner` dataset needs about 80 lines and the `dp` dataset about 25 lines to avoid 0 examples.
I'm not sure why some CI checks keep failing, could you check this? | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 64 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
@lhoestq
Thanks for reviewing!
It's harder than I thought to add a dataset card.
I checked and applied your suggestions (script, readme details, dummy data).
The dummy data is a little bit larger than expected because the `ner` dataset needs about 80 lines and the `dp` dataset about 25 lines to avoid 0 examples.
I'm not sure why some CI checks keep failing, could you check this? |
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | Thanks ! That makes sense for ner and dp
For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ? | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 39 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
Thanks ! That makes sense for ner and dp
For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ? |
https://github.com/huggingface/datasets/pull/2416 | Add KLUE dataset | > Thanks ! That makes sense for ner and dp
>
> For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?
Yes, I generated the default number of lines with the datasets-cli for the other datasets, except "dp" and "ner".
I fixed the mrc dataset, hope it's fine now :)
The reason CI failed was that I forgot to merge master into my branch.
| Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| 79 | text: Add KLUE dataset
Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
> Thanks ! That makes sense for ner and dp
>
> For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?
Yes, I generated the default number of lines with the datasets-cli for the other datasets, except "dp" and "ner".
I fixed the mrc dataset, hope it's fine now :)
The reason CI failed was that I forgot to merge master into my branch.
|
https://github.com/huggingface/datasets/pull/2414 | Update README.md | Merging since the CI error is unrelated to this PR and has been fixed on master | Provides description of data instances and dataset features
| 16 | text: Update README.md
Provides description of data instances and dataset features
Merging since the CI error is unrelated to this PR and has been fixed on master |
https://github.com/huggingface/datasets/pull/2414 | Update README.md | Thank you for taking a look at the CI error - I was a bit confused by that. Thanks! | Provides description of data instances and dataset features
| 19 | text: Update README.md
Provides description of data instances and dataset features
Thank you for taking a look at the CI error - I was a bit confused by that. Thanks! |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | I thought the renaming was suggested only for the env var, and not for the config variable... Whatever you think is better! ;) | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 23 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
I thought the renaming was suggested only for the env var, and not for the config variable... Whatever you think is better! ;) |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | I think it's better if they match, so that users immediately understand that they're directly connected | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 16 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
I think it's better if they match, so that users immediately understand that they're directly connected |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Well, if you're not concerned about back-compat here, perhaps it could be renamed and shortened too ;)
I'd suggest one of:
* `HF_DATASETS_IN_MEMORY_MAX_SIZE`
* `HF_DATASETS_MAX_IN_MEMORY_SIZE`
The intention is to:
1. make it consistent with all the other `datasets` env vars which all start with `HF_DATASETS_`, e.g.:
```
HF_DATASETS_CACHE
HF_DATASETS_OFFLINE
```
2. allow recoding in the future to support 1M, 4K, 1T and not just bytes - bytes is not a great choice for this type of variable since it will be at least X Mbytes for most reasonable uses.
And I agree with @albertvillanova that the config variable name shouldn't have the HF prefix - it's preaching to the choir - the user already knows it's a local variable.
The only reason we prefix env vars, is because they are used outside of the software.
But I do see your point about making things consistent too. How about this:
`config.IN_MEMORY_MAX_SIZE` (or whatever the final env var will be minus `HF_DATASETS_` prefix).
This is of course just my opinion.
| As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 173 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Well, if you're not concerned about back-compat here, perhaps it could be renamed and shortened too ;)
I'd suggest one of:
* `HF_DATASETS_IN_MEMORY_MAX_SIZE`
* `HF_DATASETS_MAX_IN_MEMORY_SIZE`
The intention is to:
1. make it consistent with all the other `datasets` env vars which all start with `HF_DATASETS_`, e.g.:
```
HF_DATASETS_CACHE
HF_DATASETS_OFFLINE
```
2. allow recoding in the future to support 1M, 4K, 1T and not just bytes - bytes is not a great choice for this type of variable since it will be at least X Mbytes for most reasonable uses.
And I agree with @albertvillanova that the config variable name shouldn't have the HF prefix - it's preaching to the choir - the user already knows it's a local variable.
The only reason we prefix env vars, is because they are used outside of the software.
But I do see your point about making things consistent too. How about this:
`config.IN_MEMORY_MAX_SIZE` (or whatever the final env var will be minus `HF_DATASETS_` prefix).
This is of course just my opinion.
|
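As an illustration of point 2 in the comment above, a hedged sketch of how a size env var could accept suffixes like `250M` or `2G` instead of raw bytes; the helper name, the default, and the suffix handling are assumptions for illustration, not existing `datasets` code.

```python
import os

_UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def _parse_size(value: str) -> int:
    """Turn '250M', '2G' or a plain byte count into an integer number of bytes."""
    value = value.strip().upper()
    if value and value[-1] in _UNITS:
        return int(float(value[:-1]) * _UNITS[value[-1]])
    return int(value)

# Hypothetical: read the proposed env var, falling back to a 250 MB default.
IN_MEMORY_MAX_SIZE = _parse_size(
    os.environ.get("HF_DATASETS_IN_MEMORY_MAX_SIZE", str(250 * 2**20))
)
```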
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Thanks for the comment :)
I like both propositions, and I agree this would be better in order to allow support for 1M, 1T etc.
Regarding the prefix of the variable in config.py I don't have a strong opinion. I just added it for consistency with the other variables that default to env variables like HF_DATASETS_CACHE. However I agree it would be nice to have shorter names, so I'm not against removing the prefix either. Since the feature is relatively new, I think we can still allow ourselves to rename it
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Thanks for the comment :)
I like both propositions, and I agree this would be better in order to allow support for 1M, 1T etc.
Regarding the prefix of the variable in config.py I don't have a strong opinion. I just added it for consistency with the other variables that default to env variables like HF_DATASETS_CACHE. However I agree it would be nice to have shorter names, so I'm not against removing the prefix either. Since the feature is relatively new, I think we can still allow ourselves to rename it |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Awesome,
Let's use then:
- `HF_DATASETS_IN_MEMORY_MAX_SIZE` for the env var
- `config.IN_MEMORY_MAX_SIZE` for config.
and for now bytes will be documented as the only option; down the road we can add support for K/M/G.
@albertvillanova, does that sound good to you? | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 40 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Awesome,
Let's use then:
- `HF_DATASETS_IN_MEMORY_MAX_SIZE` for the env var
- `config.IN_MEMORY_MAX_SIZE` for config.
and for now bytes will be documented as the only option; down the road we can add support for K/M/G.
@albertvillanova, does that sound good to you? |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Did I miss a PR with this change?
I want to make sure to add it to transformers tests to avoid the overhead of rebuilding the datasets.
Thank you! | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 29 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Did I miss a PR with this change?
I want to make sure to add it to transformers tests to avoid the overhead of rebuilding the datasets.
Thank you! |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | @stas00 I'm taking on this now that I have finally finished the collaborative training experiment. Sorry for the delay. | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 19 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
@stas00 I'm taking on this now that I have finally finished the collaborative training experiment. Sorry for the delay. |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Actually, why is this feature on by default?
Users are very unlikely to understand what is going on or to know where to look. Should it at the very least emit a warning that this was done without asking the user, and explain how to turn it off?
IMHO, this feature should be enabled explicitly by those who want it and not be on by default. This is an optimization that benefits only select users and is a burden on the rest.
In my line of dev/debug work (multiple short runs that have to be very fast) I now have to remember to disable this feature explicitly on every machine I work on :(
| As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 115 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Actually, why is this feature on by default?
Users are very unlikely to understand what is going on or to know where to look. Should it at the very least emit a warning that this was done without asking the user, and explain how to turn it off?
IMHO, this feature should be enabled explicitly by those who want it and not be on by default. This is an optimization that benefits only select users and is a burden on the rest.
In my line of dev/debug work (multiple short runs that have to be very fast) I now have to remember to disable this feature explicitly on every machine I work on :(
|
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Having the dataset in memory is nice to get the speed but I agree that the lack of caching for dataset in memory is an issue. By default we always had caching on.
Here the issue is that in-memory datasets are still not able to use the cache - we should fix this asap IMO.
Here is the PR that fixes this: https://github.com/huggingface/datasets/pull/2329 | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 63 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Having the dataset in memory is nice to get the speed but I agree that the lack of caching for dataset in memory is an issue. By default we always had caching on.
Here the issue is that in-memory datasets are still not able to use the cache - we should fix this asap IMO.
Here is the PR that fixes this: https://github.com/huggingface/datasets/pull/2329 |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | But why do they have to be in-memory datasets in the first place? Why not just keep the default that all datasets are normal and cached, which seems to be working solidly, and only enable in-memory datasets explicitly if the user chooses to? Then it doesn't matter whether they're cached or not for the majority of users who won't make this choice.
I mean the definition of in-memory-datasets is very arbitrary - why 250MB and not 5GB? It's very likely that the user will want to set this threshold based on their RAM availability. So while doing that they can enable the in-memory-datasets. Unless I'm missing something here.
The intention here is that things work well in general out of the box, and further performance optimizations are available to those who know what they are doing.
| As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 142 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
But why do they have to be in-memory datasets in the first place? Why not just keep the default that all datasets are normal and cached, which seems to be working solidly, and only enable in-memory datasets explicitly if the user chooses to? Then it doesn't matter whether they're cached or not for the majority of users who won't make this choice.
I mean the definition of in-memory-datasets is very arbitrary - why 250MB and not 5GB? It's very likely that the user will want to set this threshold based on their RAM availability. So while doing that they can enable the in-memory-datasets. Unless I'm missing something here.
The intention here is that things work well in general out of the box, and further performance optimizations are available to those who know what they are doing.
|
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | This is just for speed improvements, especially for data exploration/experiments in notebooks. Ideally it shouldn't have changed anything regarding caching behavior in the first place (i.e. have the caching enabled by default).
The 250MB limit has also been chosen to not create unexpected high memory usage on small laptops. | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 49 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
This is just for speed improvements, especially for data exploration/experiments in notebooks. Ideally it shouldn't have changed anything regarding caching behavior in the first place (i.e. have the caching enabled by default).
The 250MB limit has also been chosen to not create unexpected high memory usage on small laptops. |
https://github.com/huggingface/datasets/pull/2409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Won't it be more straightforward to create a performance optimization doc and share all these optimizations there? That way the user will be in the know and will be able to get faster speeds if their RAM is large.
It is hard for me to tell the average size of a dataset an average user will have, but my gut feeling is that many NLP datasets are larger than 250MB. Please correct me if I'm wrong.
But at the same time what you're saying is that once https://github.com/huggingface/datasets/pull/2329 is completed and merged, the in-memory-datasets will be cached too. So if I wait long enough the whole issue will go away altogether, correct? | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | 112 | text: Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
Won't it be more straightforward to create a performance optimization doc and share all these optimizations there? That way the user will be in the know and will be able to get faster speeds if their RAM is large.
It is hard for me to tell the average size of a dataset an average user will have, but my gut feeling is that many NLP datasets are larger than 250MB. Please correct me if I'm wrong.
But at the same time what you're saying is that once https://github.com/huggingface/datasets/pull/2329 is completed and merged, the in-memory-datasets will be cached too. So if I wait long enough the whole issue will go away altogether, correct? |
https://github.com/huggingface/datasets/pull/2404 | Paperswithcode dataset mapping | live mapping can be found at https://huggingface.co/api/pwc/datasets-mapping and will be kept up to date going forward | This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards.
As discussed:
- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.
- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as pyyaml's default. No strong opinion on that one though
| 16 | text: Paperswithcode dataset mapping
This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards.
As discussed:
- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.
- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as pyyaml's default. No strong opinion on that one though
live mapping can be found at https://huggingface.co/api/pwc/datasets-mapping and will be kept up to date going forward |
https://github.com/huggingface/datasets/pull/2399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Thank you for clarifying the precedence, @albertvillanova
Isn't it typically the case that env vars have the highest precedence?
In my understanding the point of env vars is to be able to override software without needing to touch the code.
Please correct me if this is not so in the general case. | Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387. | 52 | text: Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387.
Thank you for clarifying the precedence, @albertvillanova
Isn't it typically the case that env vars have the highest precedence?
In my understanding the point of env vars is to be able to override software without needing to touch the code.
Please correct me if this is not so in the general case. |
https://github.com/huggingface/datasets/pull/2399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Hi @stas00,
Well, I'm not an expert on this topic, but the precedence hierarchy I have normally found is, from highest to lowest:
- command line parameters
- env vars
- config files
So yes, normally env vars have precedence over configuration files.
Anyway, for Datasets, there are no configuration files. The _in-memory_ config is set from default values or env vars (which have precedence over default values). But this is done at import.
However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import). | Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387. | 103 | text: Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387.
Hi @stas00,
Well, I'm not an expert on this topic, but the precedence hierarchy I have normally found is, from highest to lowest:
- command line parameters
- env vars
- config files
So yes, normally env vars have precedence over configuration files.
Anyway, for Datasets, there are no configuration files. The _in-memory_ config is set from default values or env vars (which have precedence over default values). But this is done at import.
However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import). |
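To make the precedence described above concrete, here is a minimal illustration using the names proposed in this thread (not the actual `datasets.config` source): the env var overrides the default once at import time, and a later in-code assignment overrides both.

```python
import os

# Hypothetical config module, evaluated once at import time:
DEFAULT_IN_MEMORY_MAX_SIZE = 250 * 2**20  # library default, in bytes
IN_MEMORY_MAX_SIZE = int(
    os.environ.get("HF_DATASETS_IN_MEMORY_MAX_SIZE", DEFAULT_IN_MEMORY_MAX_SIZE)
)

# User code, after import, has the last word (highest precedence):
# import datasets
# datasets.config.IN_MEMORY_MAX_SIZE = 0  # assuming 0 means "never load in memory"
```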
https://github.com/huggingface/datasets/pull/2399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | In my limited experience env vars typically rank above cmd line args, so that one can override a script's cmd line using env vars, but usually one then sets the env var on the same command line, so it's loud and clear.
For example specifying a specific gpu number on a command line will depend on `CUDA_VISIBLE_DEVICES` so gpu0 will be different if someone sets `CUDA_VISIBLE_DEVICES=2,3` vs `CUDA_VISIBLE_DEVICES=0,1`.
> However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).
And this is exactly the problem we are trying to solve here. For a good reason HF examples don't want to use `keep_in_memory=False`, and they may choose to now set `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`, which means we again can't override it via env var.
| Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387. | 138 | text: Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387.
In my limited experience env vars typically rank above cmd line args, so that one can override a script's cmd line using env vars, but usually one then sets the env var on the same command line, so it's loud and clear.
For example specifying a specific gpu number on a command line will depend on `CUDA_VISIBLE_DEVICES` so gpu0 will be different if someone sets `CUDA_VISIBLE_DEVICES=2,3` vs `CUDA_VISIBLE_DEVICES=0,1`.
> However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).
And this is exactly the problem we are trying to solve here. For a good reason HF examples don't want to use `keep_in_memory=False`, and they may choose to now set `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`, which means we again can't override it via env var.
|
https://github.com/huggingface/datasets/pull/2399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | oops, sorry, didn't think earlier - do we need to prefix this with `HF_DATASETS` or `HF_` like all the other env vars? or is it long enough already to be unique - it's just not telling the user in the config file what project this variable is for... | Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387. | 48 | text: Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387.
oops, sorry, didn't think earlier - do we need to prefix this with `HF_DATASETS` or `HF_` like all the other env vars? or is it long enough already to be unique - it's just not telling the user in the config file what project this variable is for... |
https://github.com/huggingface/datasets/pull/2397 | Fix number of classes in indic_glue sna.bn dataset | @lhoestq there are many things missing in the README.md file, but this correction is right despite not passing the validation tests... | As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11. | 21 | text: Fix number of classes in indic_glue sna.bn dataset
As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11.
@lhoestq there are many things missing in the README.md file, but this correction is right despite not passing the validation tests... |
https://github.com/huggingface/datasets/pull/2397 | Fix number of classes in indic_glue sna.bn dataset | Yes indeed. We run the validation on all modified READMEs because we think that is the time when contributors are most likely to fix a dataset card - or it will never be done | As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11. | 36 | text: Fix number of classes in indic_glue sna.bn dataset
As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11.
Yes indeed. We run the validation on all modified READMEs because we think that is the time when contributors are most likely to fix a dataset card - or it will never be done |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Initially I removed the ` - ` since there was only one `pretty_name` per config, but it turns out it was breaking here in `from_yaml_string` (https://github.com/huggingface/datasets/blob/74751e3f98c74d22c48c6beb1fab0c13b5dfd075/src/datasets/utils/metadata.py#L197) in `/utils/metadata.py` | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 26 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Initially I removed the ` - ` since there was only one `pretty_name` per config, but it turns out it was breaking here in `from_yaml_string` (https://github.com/huggingface/datasets/blob/74751e3f98c74d22c48c6beb1fab0c13b5dfd075/src/datasets/utils/metadata.py#L197) in `/utils/metadata.py` |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Looks like the parser doesn't allow things like
```
pretty_name:
config_name1: My awesome config number 1
config_name2: My amazing config number 2
```
therefore you had to use `-` and consider them as a list.
It would be nice to add support for this case in the validator.
There's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.
Therefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.
What do you think @gchhablani ? | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 116 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Looks like the parser doesn't allow things like
```
pretty_name:
config_name1: My awesome config number 1
config_name2: My amazing config number 2
```
therefore you had to use `-` and consider them as a list.
It would be nice to add support for this case in the validator.
There's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.
Therefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.
What do you think @gchhablani ? |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).
One just needs the config_name as the key for the dictionary inside the `pretty_name` dict, and for a single config there would be only one value to print. We can do this for other fields as well, like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that we don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani?
Update: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 145 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).
One just needs the config_name as the key for the dictionary inside the `pretty_name` dict, and for a single config there would be only one value to print. We can do this for other fields as well, like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that we don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani?
Update: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string. |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Hi @lhoestq @bhavitvyamalik
@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.
Few things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.
1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases like `Dict[str,List[Dict[str,str]]]`, but I guess that won't be needed anyway in the foreseeable future?
Ideally, it would be something like - check the metadata tag type (root), do a DFS, find the non-dict objects (leaves), and validate them. Is this overkill for handling the problem?
2. For single-config datasets, there can be slightly different validation for `pretty_names` than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. This will check whether the config name is the same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For the future, however, it might be beneficial to have something like this.
3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq? | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 312 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Hi @lhoestq @bhavitvyamalik
@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.
Few things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.
1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases like `Dict[str,List[Dict[str,str]]]`, but I guess that won't be needed anyway in the foreseeable future?
Ideally, it would be something like - check the metadata tag type (root), do a DFS, find the non-dict objects (leaves), and validate them. Is this overkill for handling the problem?
2. For single-config datasets, there can be slightly different validation for `pretty_names` than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. This will check whether the config name is the same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For the future, however, it might be beneficial to have something like this.
3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq? |
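A rough sketch of the "validate only the non-dict leaves" idea from point 1 above, purely illustrative; the function name and the shape of the per-field validators are assumptions, not the existing `DatasetMetadata` code.

```python
from typing import Any, Callable, List

def validate_leaves(tag_value: Any, validator: Callable[[List[str]], None]) -> None:
    """Recurse through dict-valued (per-config) tags and apply the field
    validator only to the non-dict leaves found along the way."""
    if isinstance(tag_value, dict):
        for config_value in tag_value.values():
            validate_leaves(config_value, validator)
    else:
        # Normalize a single string to a list so one validator signature fits all.
        leaves = tag_value if isinstance(tag_value, list) else [tag_value]
        validator(leaves)
```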
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Btw, `pretty_names` can also be used to handle this during validation :P
```
-# Dataset Card for [Dataset Name]
+# Dataset Card for Allegro Reviews
```
This is where `DatasetMetadata` and `ReadMe` should be combined. But there are very few overlaps, I guess.
@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?
Like
```yaml
pretty_names:
all_configs: X (dataset_name)
config_1: X1 (config_1_name)
config_2: X2 (config_2_name)
```
Then, using the `metadata_dict`, the ReadMe header can be validated against `X`.
Sorry if I'm throwing too many ideas at once. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 92 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Btw, `pretty_names` can also be used to handle this during validation :P
```
-# Dataset Card for [Dataset Name]
+# Dataset Card for Allegro Reviews
```
This is where `DatasetMetadata` and `ReadMe` should be combined. But there are very few overlaps, I guess.
@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?
Like
```yaml
pretty_names:
all_configs: X (dataset_name)
config_1: X1 (config_1_name)
config_2: X2 (config_2_name)
```
Then, using the `metadata_dict`, the ReadMe header can be validated against `X`.
Sorry if I'm throwing too many ideas at once. |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | @bhavitvyamalik
Now, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version? | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 33 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
@bhavitvyamalik
Now, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version? |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 29 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets. |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | @bhavitvyamalik
Actually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation.
Maybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq?
I'm sensing too many things to look into. It'd be great to discuss these sometime.
But if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 73 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
@bhavitvyamalik
Actually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation.
Maybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq?
I'm sensing too many things to look into. It'd be great to discuss these sometime.
But if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation. |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | > Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?
We can definitely have a `is_valid()` method instead of doing it in the post init.
> What about adding a pretty name across all configs, and then config-specific names?
Let's keep things simple to start with. If we can allow both single-config and multi-config cases it would already be great :)
for single-config:
```yaml
pretty_name: Allegro Reviews
```
for multi-config:
```yaml
pretty_name:
mrpc: Microsoft Research Paraphrase Corpus (MRPC)
sst2: Stanford Sentiment Treebank
...
```
To support the multi-config case I see two options:
1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config
2. Allow DatasetMetadata to have dictionaries. This implies removing the flattening step. Then we could get the metadata for a specific config this way, for example:
```python
from datasets import load_dataset_card
glue_dataset_card = load_dataset_card("glue")
print(glue_dataset_card.metadata)
# DatasetMetadata object with dictionaries since there are many configs
print(glue_dataset_card.metadata.get_metadata_for_config("mrpc"))
# DatasetMetadata object with no dictionaries since there are only the mrpc tags
```
Let me know what you think or if you have other ideas. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 188 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
> Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?
We can definitely have a `is_valid()` method instead of doing it in the post init.
> What about adding a pretty name across all configs, and then config-specific names?
Let's keep things simple to start with. If we can allow both single-config and multi-config cases it would already be great :)
for single-config:
```yaml
pretty_name: Allegro Reviews
```
for multi-config:
```yaml
pretty_name:
mrpc: Microsoft Research Paraphrase Corpus (MRPC)
sst2: Stanford Sentiment Treebank
...
```
To support the multi-config case I see two options:
1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config
2. Allow DatasetMetadata to have dictionaries. This implies removing the flattening step. Then we could get the metadata for a specific config this way, for example:
```python
from datasets import load_dataset_card
glue_dataset_card = load_dataset_card("glue")
print(glue_dataset_card.metadata)
# DatasetMetadata object with dictionaries since there are many configs
print(glue_dataset_card.metadata.get_metadata_for_config("mrpc"))
# DatasetMetadata object with no dictionaries since there are only the mrpc tags
```
Let me know what you think or if you have other ideas. |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | I think Option 2 is better.
Just to clarify, will `get_metadata_for_config` also return common details, like language, say? | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 18 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
I think Option 2 is better.
Just to clarify, will `get_metadata_for_config` also return common details, like language, say? |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | > Just to clarify, will get_metadata_for_config also return common details, like language, say?
Yes that would be more convenient IMO. For example a dataset card like this
```yaml
languages:
- en
pretty_name:
config1: Pretty Name for Config 1
config2: Pretty Name for Config 2
```
then `metadata.get_metadata_for_config("config1")` would return something like
```python
DatasetMetadata(languages=["en"], pretty_name="Pretty Name for Config 1")
``` | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 59 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
> Just to clarify, will get_metadata_for_config also return common details, like language, say?
Yes that would be more convenient IMO. For example a dataset card like this
```yaml
languages:
- en
pretty_name:
config1: Pretty Name for Config 1
config3: Pretty Name for Config 2
```
then `metadata.get_metadata_for_config("config1")` would return something like
```python
DatasetMetadata(languages=["en"], pretty_name="Pretty Name for Config 1")
``` |
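A rough sketch of how `get_metadata_for_config` could resolve dict-valued fields is below. The field list and the use of `dataclasses.replace` are illustrative assumptions, not the final `DatasetMetadata` API.
```python
from dataclasses import dataclass, fields, replace
from typing import Dict, List, Optional, Union


@dataclass
class DatasetMetadata:
    languages: Optional[List[str]] = None
    pretty_name: Optional[Union[str, Dict[str, str]]] = None

    def get_metadata_for_config(self, config_name: str) -> "DatasetMetadata":
        # flat fields are kept as-is; dict-valued fields resolve to the entry for `config_name`
        resolved = {
            f.name: (getattr(self, f.name)[config_name] if isinstance(getattr(self, f.name), dict) else getattr(self, f.name))
            for f in fields(self)
        }
        return replace(self, **resolved)


meta = DatasetMetadata(languages=["en"], pretty_name={"config1": "Pretty Name for Config 1"})
print(meta.get_metadata_for_config("config1"))
# DatasetMetadata(languages=['en'], pretty_name='Pretty Name for Config 1')
```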
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | @lhoestq, should we do this post-processing in `load_dataset_card` by returning the unflattened dictionary from `DatasetMetadata`, or send this from `DatasetMetadata`? Since there isn't much to do once we have the unflattened dictionary, I feel it could be done in `load_dataset_card` itself. | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 33 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
@lhoestq, should we do this post-processing in `load_dataset_card` by returning the unflattened dictionary from `DatasetMetadata`, or send this from `DatasetMetadata`? Since there isn't much to do once we have the unflattened dictionary, I feel it could be done in `load_dataset_card` itself.
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | I was talking about this unflattened dictionary:
> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.
Post-processing meant extracting config-specific fields from this dictionary and then returning this `languages=["en"], pretty_name="Pretty Name for Config 1"` | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 80 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
I was talking about this unflattened dictionary:
> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.
Post-processing meant extracting config-specific fields from this dictionary and then returning this `languages=["en"], pretty_name="Pretty Name for Config 1"`
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | I still don't understand what you mean by "returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata", sorry. Can you give an example or rephrase this ?
IMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 76 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
I still don't understand what you mean by "returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata", sorry. Can you give an example or rephrase this ?
IMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this |
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | @lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.
@bhavitvyamalik, I think it'd be better to have this "post-processing" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.
---
Three things that are to be changed in `DatasetMetadata`:
1. Allow Non-flat elements and their validation.
2. Create a method to get metadata by config name.
3. Create a `validate()` method.
Once that is done, this PR can be updated and reviewed, wdys? | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 90 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
@lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.
@bhavitvyamalik, I think it'd be better to have this "post-processing" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.
---
Three things that are to be changed in `DatasetMetadata`:
1. Allow Non-flat elements and their validation.
2. Create a method to get metadata by config name.
3. Create a `validate()` method.
Once that is done, this PR can be updated and reviewed, wdys? |
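As a sketch of point 1 (with hypothetical names, not the final implementation), validation of non-flat fields could simply reuse the same per-tag check on every config entry:
```python
# Sketch only: `validators` maps a tag name to a checking function that returns an error
# string or None; the real DatasetMetadata checks are richer (language codes, licenses, ...).
def validate_metadata_dict(metadata_dict: dict, validators: dict) -> None:
    errors = []
    for name, validator in validators.items():
        value = metadata_dict.get(name)
        if value is None:
            continue
        # a non-flat field holds one value per config; validate each entry the same way
        values = value.values() if isinstance(value, dict) else [value]
        for v in values:
            error = validator(v)
            if error:
                errors.append(f"{name}: {error}")
    if errors:
        raise TypeError("Could not validate the metadata:\n" + "\n".join(errors))
```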
https://github.com/huggingface/datasets/pull/2395 | `pretty_name` for dataset in YAML tags | Thanks @gchhablani for the help ! Now that https://github.com/huggingface/datasets/pull/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :) | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. | 21 | text: `pretty_name` for dataset in YAML tags
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here.
Thanks @gchhablani for the help ! Now that https://github.com/huggingface/datasets/pull/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :) |
https://github.com/huggingface/datasets/pull/2392 | Update text classification template labels in DatasetInfo __post_init__ | If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:
```python
dset = load_dataset("some_dataset") # let's say 'some_dataset' supports text classification and question answering
dset_tc = dset.prepare_for_task("text-classification")
dset_tc.prepare_for_task("question-answering") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'
```
I see 2 options:
1. to drop the task templates after the first `Dataset.prepare_for_task` call
2. to save only the tasks compatible with the new schema after `Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid) | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place (see the sketch after this list).
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
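For illustration, the docstring-sharing decorator could look roughly like the sketch below; the name `share_docstring` and the usage are made up for the example, not the exact helper added in this PR.
```python
def share_docstring(source):
    """Copy `source`'s docstring onto the decorated function so the docs live in one place."""

    def decorator(target):
        target.__doc__ = source.__doc__
        return target

    return decorator


# usage sketch: DatasetDict.prepare_for_task could reuse the docstring of Dataset.prepare_for_task
# @share_docstring(Dataset.prepare_for_task)
# def prepare_for_task(self, task): ...
```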
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO | 139 | text: Update text classification template labels in DatasetInfo __post_init__
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:
```python
dset = load_dataset("some_dataset") # let's say 'some_dataset' supports text classification and question answering
dset_tc = dset.prepare_for_task("text-classification")
dset_tc.prepare_for_task("question-answering") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'
```
I see 2 options:
1. to drop the task templates after the first `Dataset.prepare_for_task` call
2. to save only the tasks compatible with the new schema after `Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid)
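To make option (1) concrete, here is a minimal sketch (assuming a `template` object exposing `column_mapping` and `features`, as in the traceback above); the real implementation in the library may differ:
```python
def prepare_for_task_option_1(dataset, template):
    # keep only the columns the task needs and rename them according to the template
    columns_to_drop = [col for col in dataset.column_names if col not in template.column_mapping]
    dataset = dataset.remove_columns(columns_to_drop)
    dataset = dataset.rename_columns(template.column_mapping)
    # option (1): flush the templates so the DatasetInfo copied during `cast` no longer
    # references the pre-cast column names (which is what triggers the KeyError)
    dataset.info.task_templates = None
    return dataset.cast(features=template.features)
```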
https://github.com/huggingface/datasets/pull/2392 | Update text classification template labels in DatasetInfo __post_init__ | > If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:
>
> ```python
> dset = load_dataset("some_dataset") # let's say 'some_dataset' supports text classification and question answering
> dset_tc = dset.prepare_for_task("text-classification")
> dset_tc.prepare_for_task("question-answering") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'
> ```
>
> I see 2 options:
>
> 1. to drop the task templates after the first `Dataset.prepare_for_task` call
> 2. to save only the tasks compatible with the new schema after `Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid)
thanks for the great idea @mariosasko and for spotting the problem with sequential task preparation! i am in favour of your option (1) since it is simple and saves us from having to keep track of the column mappings across multiple steps.
i've implemented the change and refactored the tests to account for the new approach (including a new test that the templates are flushed after we call `prepare_for_task`). perhaps the slightly inelegant aspect here is that if we want to allow the user to set `labels` in the `TextClassification` template, then we have two places (`DatasetInfo.__post_init__` and `TextClassification.__post_init__`) where we need to update `label_schema`.
on the other hand, dropping `labels` from the `TextClassification` signature would have the nice effect that users only have to think about column names when defining their tasks.
in any case, i think it would be a good idea to merge #2376 soon, as the current PR is touching a lot of the same places in the codebase π
| This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the _post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO | 315 | text: Update text classification template labels in DatasetInfo __post_init__
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
> If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:
>
> ```python
> dset = load_dataset("some_dataset") # let's say 'some_dataset' supports text classification and question answering
> dset_tc = dset.prepare_for_task("text-classification")
> dset_tc.prepare_for_task("question-answering") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'
> ```
>
> I see 2 options:
>
> 1. to drop the task templates after the first `Dataset.prepare_for_task` call
> 2. to save only the tasks compatible with the new schema after `Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid)
thanks for the great idea @mariosasko and for spotting the problem with sequential task preparation! i am in favour of your option (1) since it is simple and saves us from having to keep track of the column mappings across multiple steps.
i've implemented the change and refactored the tests to account for the new approach (including a new test that the templates are flushed after we call `prepare_for_task`). perhaps the slightly inelegant aspect here is that if we want to allow the user to set `labels` in the `TextClassification` template, then we have two places (`DatasetInfo.__post_init__` and `TextClassification.__post_init__`) where we need to update `label_schema`.
on the other hand, dropping `labels` from the `TextClassification` signature would have the nice effect that users only have to think about column names when defining their tasks.
in any case, i think it would be a good idea to merge #2376 soon, as the current PR is touching a lot of the same places in the codebase π
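A minimal sketch of the flush test mentioned above could be the following, assuming a dataset that carries a text-classification template (e.g. the local `emotion` script used in the examples):
```python
from datasets import load_dataset


def test_task_templates_are_flushed_after_preparation():
    dset = load_dataset("./datasets/emotion/", split="train")
    prepared = dset.prepare_for_task("text-classification")
    # once the schema has been cast for the task, the templates should be gone
    assert prepared.info.task_templates is None
```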
|
https://github.com/huggingface/datasets/pull/2392 | Update text classification template labels in DatasetInfo __post_init__ | Tests are failing only because the `emotion` dataset card doesn't pass our dataset card validator (tags are missing), you can ignore this since it's unrelated to this PR. | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO | 28 | text: Update text classification template labels in DatasetInfo __post_init__
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
Tests are failing only because the `emotion` dataset card doesn't pass our dataset card validator (tags are missing); you can ignore this since it's unrelated to this PR. |
https://github.com/huggingface/datasets/pull/2392 | Update text classification template labels in DatasetInfo __post_init__ | @lhoestq @SBrandeis i've fixed the tests and think this is now in a good state for another review :) | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO | 19 | text: Update text classification template labels in DatasetInfo __post_init__
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
@lhoestq @SBrandeis i've fixed the tests and think this is now in a good state for another review :) |
https://github.com/huggingface/datasets/pull/2392 | Update text classification template labels in DatasetInfo __post_init__ | Maybe @SBrandeis you can also take a look to make sure you're fine with it ? | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO | 16 | text: Update text classification template labels in DatasetInfo __post_init__
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the __post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
Maybe @SBrandeis you can also take a look to make sure you're fine with it ? |
https://github.com/huggingface/datasets/pull/2389 | Insert task templates for text classification | You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`? | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | 33 | text: Insert task templates for text classification
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`? |
https://github.com/huggingface/datasets/pull/2389 | Insert task templates for text classification | > You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?
Hi @yjernite, these code insertions are auto-generated so they could certainly be improved :)
Just so I understand, your idea is that instead of doing something like
```python
class AGNews(datasets.GeneratorBasedBuilder):
"""AG News topic classification dataset."""
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(
names=["World", "Sports", "Business", "Sci/Tech"]
),
}
),
homepage="http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html",
citation=_CITATION,
task_templates=[
TextClassification(
labels=("Business", "Sci/Tech", "Sports", "World"),
text_column="text",
label_column="label",
)
],
)
```
we could do the following:
```python
class AGNews(datasets.GeneratorBasedBuilder):
"""AG News topic classification dataset."""
def _info(self):
info = datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(
names=["World", "Sports", "Business", "Sci/Tech"]
),
}
),
homepage="http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html",
citation=_CITATION,
)
info.task_templates = [
TextClassification(
labels=info.features["label"].names,
text_column="text",
label_column="label",
)
]
return info
```
| This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | 147 | text: Insert task templates for text classification
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
> You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?
Hi @yjernite, these code insertions are auto-generated so they could certainly be improved :)
Just so I understand, your idea is that instead of doing something like
```python
class AGNews(datasets.GeneratorBasedBuilder):
"""AG News topic classification dataset."""
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(
names=["World", "Sports", "Business", "Sci/Tech"]
),
}
),
homepage="http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html",
citation=_CITATION,
task_templates=[
TextClassification(
labels=("Business", "Sci/Tech", "Sports", "World"),
text_column="text",
label_column="label",
)
],
)
```
we could do the following:
```python
class AGNews(datasets.GeneratorBasedBuilder):
"""AG News topic classification dataset."""
def _info(self):
info = datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(
names=["World", "Sports", "Business", "Sci/Tech"]
),
}
),
homepage="http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html",
citation=_CITATION,
)
info.task_templates = [
TextClassification(
labels=info.features["label"].names,
text_column="text",
label_column="label",
)
]
return info
```
|
https://github.com/huggingface/datasets/pull/2389 | Insert task templates for text classification | Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ? | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | 22 | text: Insert task templates for text classification
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ? |
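For illustration, a rough sketch of that idea as a standalone helper: derive the template labels from the `ClassLabel` feature so dataset scripts don't have to repeat them. The import paths, the helper name, and the exact `TextClassification` constructor are assumptions based on the snippets above, not the final implementation.
```python
from datasets import ClassLabel, Features, Value
from datasets.tasks import TextClassification

def fill_template_labels(features: Features, template: TextClassification) -> TextClassification:
    # Read the class names from the ClassLabel feature instead of duplicating them
    # in the dataset script (roughly what the __post_init__ update would do).
    label_feature = features[template.label_column]
    assert isinstance(label_feature, ClassLabel), "label_column must be a ClassLabel feature"
    return TextClassification(
        text_column=template.text_column,
        label_column=template.label_column,
        labels=tuple(label_feature.names),
    )

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
template = TextClassification(text_column="text", label_column="label")
print(fill_template_labels(features, template).labels)  # ('neg', 'pos')
```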
https://github.com/huggingface/datasets/pull/2389 | Insert task templates for text classification | > Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?
Oh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )
There might be reasons where there should be a legitimate difference, but I can't really think of any right now, and we can always duplicate the feature | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | 96 | text: Insert task templates for text classification
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
> Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?
Oh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )
There might be reasons where there should be a legitimate difference, but I can't really think of any right now, and we can always duplicate the feature
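To make the point about feature types concrete, here is a small, hypothetical comparison (the column names and class names are just placeholders): a plain `string` label gives a task template nothing to reuse, while a `ClassLabel` enumerates the classes once and makes them available programmatically.
```python
from datasets import ClassLabel, Features, Value

# A plain string label: the class names are not enumerated anywhere.
loose_features = Features({"text": Value("string"), "label": Value("string")})

# A ClassLabel feature: the names are declared once and can be reused by a template.
strict_features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
print(strict_features["label"].names)  # ['neg', 'pos']
```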
https://github.com/huggingface/datasets/pull/2389 | Insert task templates for text classification | Let's ignore the CI failures since they are unrelated to your changes. They're about dataset card issues | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | 17 | text: Insert task templates for text classification
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
Let's ignore the CI failures since they are unrelated to your changes. They're about dataset card issues
https://github.com/huggingface/datasets/pull/2384 | Add args description to DatasetInfo | Thanks for the suggestions! I've included them and made a few minor tweaks along the way | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | 16 | text: Add args description to DatasetInfo
Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.
Thanks for the suggestions! I've included them and made a few minor tweaks along the way |
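For illustration, the fields being documented can be inspected on any loaded dataset; a small, hypothetical example (the `emotion` dataset is used purely as a placeholder):
```python
from datasets import load_dataset

ds = load_dataset("emotion", split="train")
info = ds.info

# A few of the DatasetInfo attributes being documented:
print(info.description)
print(info.features)
print(info.download_size)  # size of the downloaded source files
print(info.dataset_size)   # size of the generated Arrow dataset
print(info.size_in_bytes)  # overall size (downloaded files + generated dataset)
```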
https://github.com/huggingface/datasets/pull/2384 | Add args description to DatasetInfo | Please merge master into this branch to fix the CI, I just fixed metadata validation tests. | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | 16 | text: Add args description to DatasetInfo
Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.
Please merge master into this branch to fix the CI, I just fixed metadata validation tests. |
https://github.com/huggingface/datasets/pull/2374 | add `desc` to `tqdm` in `Dataset.map()` | Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....
https://github.com/huggingface/transformers/issues/11797 | Fixes #2330. Please let me know if anything is also required in this | 26 | text: add `desc` to `tqdm` in `Dataset.map()`
Fixes #2330. Please let me know if anything is also required in this
Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....
https://github.com/huggingface/transformers/issues/11797 |
https://github.com/huggingface/datasets/pull/2374 | add `desc` to `tqdm` in `Dataset.map()` | Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side | Fixes #2330. Please let me know if anything is also required in this | 17 | text: add `desc` to `tqdm` in `Dataset.map()`
Fixes #2330. Please let me know if anything is also required in this
Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side |
https://github.com/huggingface/datasets/pull/2374 | add `desc` to `tqdm` in `Dataset.map()` | Definitely @stas00. From what I could gather, you guys want more meaningful `.map` calls for all examples [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch)? | Fixes #2330. Please let me know if anything is also required in this | 18 | text: add `desc` to `tqdm` in `Dataset.map()`
Fixes #2330. Please let me know if anything is also required in this
Definitely @stas00. From what I could gather, you guys want more meaningful `.map` calls for all examples [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch)? |
https://github.com/huggingface/datasets/pull/2374 | add `desc` to `tqdm` in `Dataset.map()` | That's exactly right, @bhavitvyamalik
Perhaps the best approach is to do one example, see that the other maintainers agree on it, and then replicate it to the others. | Fixes #2330. Please let me know if anything is also required in this | 25 | text: add `desc` to `tqdm` in `Dataset.map()`
Fixes #2330. Please let me know if anything is also required in this
That's exactly right, @bhavitvyamalik
Perhaps the best approach is to do one example, see that the other maintainers agree on it, and then replicate it to the others.
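For reference, a hypothetical example of what such a labeled `.map()` call could look like with the new `desc` argument (the dataset and the mapping function are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

def lowercase(example):
    return {"text": example["text"].lower()}

# The progress bar now carries a description instead of being an anonymous tqdm bar.
ds = ds.map(lowercase, desc="Lowercasing the review text")
```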
https://github.com/huggingface/datasets/pull/2372 | ConvQuestions benchmark added | Thanks for your helpful comments and suggestions! :)
I integrated the additional fields, and extended some of the README/dataset card.
And I actually realized that we had the cc-by-4.0 license for the dataset, so this was also changed. | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | 37 | text: ConvQuestions benchmark added
Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :)
Thanks for your helpful comments and suggestions! :)
I integrated the additional fields, and extended some of the README/dataset card.
And I actually realized that we had the cc-by-4.0 license for the dataset, so this was also changed.
https://github.com/huggingface/datasets/pull/2370 | Adding HendrycksTest dataset | @lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause of). | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry in a row of length 6 is loaded). The dataset is loading just fine. Hope you can kindly help!
Thank you! | 34 | text: Adding HendrycksTest dataset
Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry in a row of length 6 is loaded). The dataset is loading just fine. Hope you can kindly help!
Thank you!
@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause of).
https://github.com/huggingface/datasets/pull/2370 | Adding HendrycksTest dataset | I took a look at the dummy data and some csv lines were cropped. I fixed them :) | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry in a row of length 6 is loaded). The dataset is loading just fine. Hope you can kindly help!
Thank you! | 18 | text: Adding HendrycksTest dataset
Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry in a row of length 6 is loaded). The dataset is loading just fine. Hope you can kindly help!
Thank you!
I took a look at the dummy data and some csv lines were cropped. I fixed them :) |
https://github.com/huggingface/datasets/pull/2364 | README updated for SNLI, MNLI | Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ? | Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | 22 | text: README updated for SNLI, MNLI
Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'
Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ? |
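For context, the preprocessing step recommended in the updated cards could look roughly like this (a sketch; the split name is only an example):
```python
from datasets import load_dataset

snli = load_dataset("snli", split="train")

# Examples without a gold label are marked with label == -1 and should be
# dropped before training, as the updated dataset cards recommend.
snli = snli.filter(lambda example: example["label"] != -1)
```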
https://github.com/huggingface/datasets/pull/2362 | Fix web_nlg metadata | Hi ! `release_v2.1` and the others are dataset configuration names.
The configuration names are used to show the right code snippet in the UI to load the dataset.
For example if the parsing of the web_nlg tags worked correctly we would have:
![image](https://user-images.githubusercontent.com/42851186/118475444-8d1e5d00-b70c-11eb-98e9-844d4daf6139.png)
Therefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.
Moreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset("indic_glue", "sna.bn")`.
Is this something that can be fixed on the moonlanding side instead ? | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| 114 | text: Fix web_nlg metadata
Our metadata storage system does not support `.` inside keys. cc @Pierrci
Hi ! `release_v2.1` and the others are dataset configuration names.
The configuration names are used to show the right code snippet in the UI to load the dataset.
For example if the parsing of the web_nlg tags worked correctly we would have:
![image](https://user-images.githubusercontent.com/42851186/118475444-8d1e5d00-b70c-11eb-98e9-844d4daf6139.png)
Therefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.
Moreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset("indic_glue", "sna.bn")`.
Is this something that can be fixed on the moonlanding side instead ? |
https://github.com/huggingface/datasets/pull/2362 | Fix web_nlg metadata | > Is this something that can be fixed on the moonlanding side instead ?
Not really, unless we change the database :)
We'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| 45 | text: Fix web_nlg metadata
Our metadata storage system does not support `.` inside keys. cc @Pierrci
> Is this something that can be fixed on the moonlanding side instead ?
Not really, unless we change the database :)
We'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata |
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | Hi @lhoestq,
It turns out that pyarrow `ListArray` objects are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible, can we convert that `ListArray` output to a list in order for tests to pass? Under the hood it'll still maintain the dtype of the numpy array passed as input | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 56 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
Hi @lhoestq,
It turns out that pyarrow `ListArray` objects are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible, can we convert that `ListArray` output to a list in order for tests to pass? Under the hood it'll still maintain the dtype of the numpy array passed as input
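A minimal illustration of the dtype issue the PR addresses (not the actual helper, just the underlying pyarrow behaviour):
```python
import numpy as np
import pyarrow as pa

vec = np.array([1.0, 2.0], dtype=np.float32)

# Converting through a Python list loses the original precision: pyarrow infers float64.
print(pa.array(vec.tolist()).type)  # double

# Building the arrow array directly from the numpy array keeps float32.
print(pa.array(vec).type)  # float
```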
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch` https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039 and `test_map_tf`https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056
they're expecting `float64`. Shouldn't that be `float32` now? | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 36 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch` https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039 and `test_map_tf`https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056
they're expecting `float64`. Shouldn't that be `float32` now? |
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.
I think that we should always keep the precision of the original tensor (torch/tf/numpy).
It means that as it is in this PR it's fine (the precision is conserved when doing the torch/tf -> numpy conversion).
This is a breaking change but in my opinion the fact that we had Value("float64") for torch.float32 tensors was an issue already.
Let me know what you think. Cc @albertvillanova if you have an opinion on this
If we agree on doing this breaking change, we can just change the test. | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 101 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.
I think that we should always keep the precision of the original tensor (torch/tf/numpy).
It means that as it is in this PR it's fine (the precision is conserved when doing the torch/tf -> numpy conversion).
This is a breaking change but in my opinion the fact that we had Value("float64") for torch.float32 tensors was an issue already.
Let me know what you think. Cc @albertvillanova if you have an opinion on this
If we agree on doing this breaking change, we can just change the test. |
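For reference, a small check of the default precisions being discussed (sketch; assumes numpy and torch are installed):
```python
import numpy as np
import torch

print(np.array([1.0]).dtype)              # float64 (numpy default)
print(torch.tensor([1.0]).dtype)          # torch.float32 (pytorch default)

# Converting a torch tensor to numpy keeps its 32-bit precision,
# which is the behaviour the PR wants to carry through to Arrow.
print(torch.tensor([1.0]).numpy().dtype)  # float32
```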
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | Hi @lhoestq,
Merged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.
> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`
>
> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039
>
> and `test_map_tf`
> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056
>
>
> they're expecting `float64`. Shouldn't that be `float32` now?
| Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 69 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
Hi @lhoestq,
Merged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.
> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`
>
> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039
>
> and `test_map_tf`
> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056
>
>
> they're expecting `float64`. Shouldn't that be `float32` now?
|
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | > they're expecting float64. Shouldn't that be float32 now?
Yes feel free to update those tests :)
It would be nice to have the same test for JAX as well | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 30 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
> they're expecting float64. Shouldn't that be float32 now?
Yes feel free to update those tests :)
It would be nice to have the same test for JAX as well |
https://github.com/huggingface/datasets/pull/2361 | Preserve dtype for numpy/torch/tf/jax arrays | Added the same test for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well. | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | 26 | text: Preserve dtype for numpy/torch/tf/jax arrays
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
Added the same test for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well.
https://github.com/huggingface/datasets/pull/2358 | Roman Urdu Stopwords List | Hi ! Thanks for sharing :)
I think the best place to share this is probably the `Languages at Hugging Face` section of the forum:
https://discuss.huggingface.co/c/languages-at-hugging-face/15
Since this is not a dataset, I'm closing this PR if you don't mind | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu to help efforts of analyzing text data in roman Urdu which makes up a huge part of daily internet interaction of Roman-Urdu users. | 40 | text: Roman Urdu Stopwords List
A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu, to help efforts to analyze text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users.
Hi ! Thanks for sharing :)
I think the best place to share this is probably the `Languages at Hugging Face` section of the forum:
https://discuss.huggingface.co/c/languages-at-hugging-face/15
Since this is not a dataset, I'm closing this PR if you don't mind |
https://github.com/huggingface/datasets/pull/2358 | Roman Urdu Stopwords List | Thank you I will look into the link that you have shared with me.
| A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu to help efforts of analyzing text data in roman Urdu which makes up a huge part of daily internet interaction of Roman-Urdu users. | 63 | text: Roman Urdu Stopwords List
A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu, to help efforts to analyze text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users.
Thank you I will look into the link that you have shared with me.
|
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.
If it needs to be rerun, I can try to do it on my own machine, but I've had memory issues with a previous dataset due to my compute constraints, so I'd prefer to hopefully avoid it altogether if it's not necessary to regenerate. | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 90 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.
If it needs to be rerun, I can try to do it on my own machine, but I've had memory issues with a previous dataset due to my compute constraints, so I'd prefer to hopefully avoid it altogether if it's not necessary to regenerate.
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:
`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing? | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 28 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:
`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing? |
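For reference, the naming convention being checked here can be reproduced with a small camel-case to snake_case helper. This is an illustrative sketch in the spirit of the library's naming utility, not necessarily its exact implementation:

```python
import re

def camelcase_to_snakecase(name: str) -> str:
    # Split before each capitalized word, then lowercase everything.
    name = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    return name.lower()

print(camelcase_to_snakecase("CodeXGlueCcCloneDetectionBigCloneBenchMain"))
# -> code_x_glue_cc_clone_detection_big_clone_bench_main
```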
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | > Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:
>
> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?
If it's already in this format then it's fine thanks ! It's all good then
To fix the CI you just need to add the `encoding=` parameters to the `open()` calls | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 62 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
> Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:
>
> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?
If it's already in this format then it's fine thanks ! It's all good then
To fix the CI you just need to add the `encoding=` parameters to the `open()` calls |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | @lhoestq I think everything should be good to go besides the code styling, which seem to be due to missing or unsupported metadata tags for the READMEs, is this something I should worry about since all the other datasets seem to be failing as well? | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 45 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
@lhoestq I think everything should be good to go besides the code styling, which seems to be due to missing or unsupported metadata tags for the READMEs. Is this something I should worry about, since all the other datasets seem to be failing as well?
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections/subsections.
Also, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search. | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 77 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections/subsections.
Also, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search. |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.
Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature
cc @yjernite what do you think about extending our languages taxonomy to programming languages ? | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 92 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
> Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.
Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature
cc @yjernite what do you think about extending our languages taxonomy to programming languages ? |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non-WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as "Supported Tasks and Leaderboards" -> "Supported Tasks" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as "Source Data" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. For some of the sections like "Source Data", is this info required? | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 138 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non-WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as "Supported Tasks and Leaderboards" -> "Supported Tasks" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as "Source Data" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. For some of the sections like "Source Data", is this info required?
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that
I also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)
| Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 37 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that
I also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)
|
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.
If you are interested you can fill those sections as well, or leave them empty for now.
This will also fix the error regarding "Source Data"
You can see the template of the readme here:
https://github.com/huggingface/datasets/blob/9d8bf36fdb861d9b2922d7c782fb58f9f542997c/templates/README.md | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 52 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.
If you are interested you can fill those sections as well, or leave them empty for now.
This will also fix the error regarding "Source Data"
You can see the template of the readme here:
https://github.com/huggingface/datasets/blob/9d8bf36fdb861d9b2922d7c782fb58f9f542997c/templates/README.md |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | > > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.
>
> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature
>
> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?
Sounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ?
I don't think we currently have `_` in language codes/names, but also don't see what it would break *a priori* | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 132 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
> > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.
>
> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature
>
> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?
Sounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ?
I don't think we currently have `_` in language codes/names, but also don't see what it would break *a priori* |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ? | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 26 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ? |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Hi guys, I just started working on https://github.com/huggingface/datasets/pull/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https://github.com/madlag/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.
| Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 47 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Hi guys, I just started working on https://github.com/huggingface/datasets/pull/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https://github.com/madlag/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.
|
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | I am renaming the main classes to match the dataset names, for example : CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode . And I am regenerating the dataset_infos.json accordingly. | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 25 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
I am renaming the main classes to match the dataset names, for example: CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode. And I am regenerating the dataset_infos.json accordingly.
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)
This PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 59 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)
This PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:. | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 45 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:. |
https://github.com/huggingface/datasets/pull/2357 | Adding Microsoft CodeXGlue Datasets | Thanks @ncoop57 for your contribution! It will be really cool to see those datasets used as soon as they are released! | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | 22 | text: Adding Microsoft CodeXGlue Datasets
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq.
Thanks @ncoop57 for your contribution! It will be really cool to see those datasets used as soon as they are released!
https://github.com/huggingface/datasets/pull/2355 | normalized TOCs and titles in data cards | Oh right! I'd be in favor of still having the same TOC across the board, we can either leave it as is or add a `[More Info Needed]` `Contributions` Section wherever it's currently missing, wdyt? | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Content
This PR normalizes all of them to the newer version | 35 | text: normalized TOCs and titles in data cards
I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Contents.
This PR normalizes all of them to the newer version
Oh right! I'd be in favor of still having the same TOC across the board, we can either leave it as is or add a `[More Info Needed]` `Contributions` Section wherever it's currently missing, wdyt? |
https://github.com/huggingface/datasets/pull/2355 | normalized TOCs and titles in data cards | Merging for now to avoid conflict since there are so many changes but let's figure out the contributions section next ;) | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Content
This PR normalizes all of them to the newer version | 21 | text: normalized TOCs and titles in data cards
I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Content
This PR normalizes all of them to the newer version
Merging for now to avoid conflict since there are so many changes but let's figure out the contributions section next ;) |
https://github.com/huggingface/datasets/pull/2352 | Set to_json default to JSON lines | This is perfect, @albertvillanova - thank you! Tested it to work.
Might it be a good idea to document the args to `to_json`?
And maybe also a very basic progress bar? It took 10 min for 8M large records from `openwebtext`, so perhaps some indication that it's still alive every minute or so? | With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines | 51 | text: Set to_json default to JSON lines
With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines
This is perfect, @albertvillanova - thank you! I tested it and it works.
Might it be a good idea to document the args to `to_json`?
And maybe also a very basic progress bar? It took 10 min for 8M large records from `openwebtext`, so perhaps some indication that it's still alive every minute or so?
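A minimal usage sketch of the behaviour described in this PR (exact keyword arguments may vary between versions): `Dataset.to_json` now writes JSON Lines by default, i.e. one JSON object per line.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Default after this PR: JSON Lines output, one record per line.
ds.to_json("dump.jsonl")
```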
https://github.com/huggingface/datasets/pull/2348 | Add tests for dataset cards | @lhoestq
Should I remove the scripts? Or at least remove running them from the CircleCI config?
Also, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata. | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq | 54 | text: Add tests for dataset cards
Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq
@lhoestq
Should I remove the scripts? Or at least remove running them from the CircleCI config?
Also, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata. |
https://github.com/huggingface/datasets/pull/2348 | Add tests for dataset cards | Also feel free to remove the scripts from the CI and also remove the scripts files :) | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq | 17 | text: Add tests for dataset cards
Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq
Also feel free to remove the scripts from the CI and also remove the script files :)
https://github.com/huggingface/datasets/pull/2346 | Add Qasper Dataset | I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this.
Some TOC titles may be different, but I filled them in to the best of my knowledge, and the README quality check passes now.
ready for review @lhoestq | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| 40 | text: Add Qasper Dataset
[Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this.
Some TOC titles may be different, but I filled them in to the best of my knowledge, and the README quality check passes now.
ready for review @lhoestq |
https://github.com/huggingface/datasets/pull/2336 | Fix overflow issue in interpolation search | ~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge).
@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho. | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | 66 | text: Fix overflow issue in interpolation search
Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge).
@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho. |
https://github.com/huggingface/datasets/pull/2336 | Fix overflow issue in interpolation search | Hi ! Thanks for the fix.
Unfortunately in terms of speed this is not acceptable :/
The `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.
Would it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?
On windows machines the default is int32 unfortunately so we have to force the dtype to be int64
| Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | 66 | text: Fix overflow issue in interpolation search
Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
Hi ! Thanks for the fix.
Unfortunately in terms of speed this is not acceptable :/
The `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.
Would it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?
On windows machines the default is int32 unfortunately so we have to force the dtype to be int64
|
https://github.com/huggingface/datasets/pull/2336 | Fix overflow issue in interpolation search | Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bound only with memory) before multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. But for now, the first option is OK for the sake of simplicity. | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | 64 | text: Fix overflow issue in interpolation search
Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bounded only by memory) before the multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. But for now, the first option is OK for the sake of simplicity.
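A rough sketch of that alternative — a generic interpolation search over a sorted offsets array, not the library's actual implementation — where the elements are cast to Python integers before the multiplication:

```python
import numpy as np

def interpolation_search(arr, x):
    """Return k such that arr[k] <= x < arr[k + 1], for a sorted offsets array."""
    i, j = 0, len(arr) - 1
    while i < j and arr[i] <= x < arr[j]:
        # Python ints have arbitrary precision, so this intermediate product
        # cannot overflow the way fixed-width numpy scalars can.
        k = i + (j - i) * (int(x) - int(arr[i])) // (int(arr[j]) - int(arr[i]))
        if arr[k] <= x < arr[k + 1]:
            return k
        elif arr[k] < x:
            i = k + 1
        else:
            j = k
    raise IndexError(f"{x} is outside the range [{arr[0]}, {arr[-1]})")

offsets = np.cumsum(np.full(5, 1_000_000_000), dtype=np.int64)  # made-up offsets
print(interpolation_search(offsets, 3_500_000_000))  # 2
```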
https://github.com/huggingface/datasets/pull/2329 | Add cache dir for in-memory datasets | @lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | 33 | text: Add cache dir for in-memory datasets
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322
@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) |
https://github.com/huggingface/datasets/pull/2329 | Add cache dir for in-memory datasets | @lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.
Few suggestions I have:
* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`):
https://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824
, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?
* I think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` π) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic. | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | 125 | text: Add cache dir for in-memory datasets
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322
@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.
Few suggestions I have:
* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`):
https://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824
, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?
* I think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` π) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.
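For reference, a small sketch of the two mechanisms being compared. Note that `use_caching` was only a proposal in this PR and is not part of the released API; the global switch (`set_caching_enabled`, superseded by `disable_caching` in newer releases) and `keep_in_memory` shown below are the shipped pieces, so this only illustrates the distinction, not the PR's implementation:

```python
from datasets import load_dataset, set_caching_enabled

# Global switch: transforms such as .map() no longer reload their results from
# the cache (the downloaded data itself still lives in the usual cache directory).
set_caching_enabled(False)
squad = load_dataset("squad", split="validation")
squad = squad.map(lambda ex: {"question_len": len(ex["question"])})
set_caching_enabled(True)  # restore the default behaviour

# Loading fully in memory — the situation this PR is about: such a dataset has
# no cache directory of its own, so its transforms are not cached on disk
# unless the user passes an explicit `cache_file_name`.
squad_in_memory = load_dataset("squad", split="validation", keep_in_memory=True)
```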
https://github.com/huggingface/datasets/pull/2329 | Add cache dir for in-memory datasets | Hi @mariosasko
We discussed internally and we think that this feature might not be the direction we're going to take, for these reasons:
- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition
- there are a few edge cases which are really confusing:
  - map on an in-memory dataset with a `cache_file_name` specified by the user -> should the result be in memory or from disk ?
  - it would require a special cache directory just for in-memory datasets, since they don't have a preferred directory for caching
- it would break a lot of stuff and would require rewriting significant parts of the core code and the tests
So in the end we're probably going to close this PR.
Let me know what you think, and thank you anyway for your help on this ! | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | 165 | text: Add cache dir for in-memory datasets
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322
Hi @mariosasko
We discussed internally and we think that this feature might not be the direction we're going to take, for these reasons:
- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition
- there are a few edge cases which are really confusing:
  - map on an in-memory dataset with a `cache_file_name` specified by the user -> should the result be in memory or from disk ?
  - it would require a special cache directory just for in-memory datasets, since they don't have a preferred directory for caching
- it would break a lot of stuff and would require rewriting significant parts of the core code and the tests
So in the end we're probably going to close this PR.
Let me know what you think, and thank you anyway for your help on this ! |
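To make the first edge case above concrete, here is a small sketch using the released API of that era (the transform and file name are made up) of the situation whose expected behaviour is ambiguous:

```python
from datasets import load_dataset

# The table is loaded fully into memory rather than memory-mapped from disk...
ds = load_dataset("squad", split="validation", keep_in_memory=True)

# ...yet the user explicitly asks for an on-disk cache file for this transform.
# Should the result stay in memory, or be written to and reloaded from disk?
ds = ds.map(
    lambda ex: {"question_len": len(ex["question"])},
    cache_file_name="./question_len.arrow",  # made-up path
)
```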