url (stringlengths 58-61) | repository_url (stringclasses 1) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.26B) | node_id (stringlengths 18-32) | number (int64 1-4.44k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,654B) | updated_at (int64 1,587B-1,654B) | closed_at (int64 1,587B-1,654B, nullable) | author_association (stringclasses 3) | active_lock_reason (null) | body (stringlengths 0-228k, nullable) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 1) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2411/comments | https://api.github.com/repos/huggingface/datasets/issues/2411/events | https://github.com/huggingface/datasets/pull/2411 | 903,671,778 | MDExOlB1bGxSZXF1ZXN0NjU0OTAzNjg2 | 2,411 | Add DOI badge to README | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,119,007,000 | 1,622,122,974,000 | 1,622,122,974,000 | MEMBER | null | Once the latest release was published, the DOI badge was automatically generated by Zenodo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2411",
"html_url": "https://github.com/huggingface/datasets/pull/2411",
"diff_url": "https://github.com/huggingface/datasets/pull/2411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2411.patch",
"merged_at": 1622122974000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2410/comments | https://api.github.com/repos/huggingface/datasets/issues/2410/events | https://github.com/huggingface/datasets/pull/2410 | 903,613,676 | MDExOlB1bGxSZXF1ZXN0NjU0ODUwMjY4 | 2,410 | fix #2391 add original answers in kilt-TriviaQA | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM, but I'm not sure what's going on with the Unix tests @lhoestq ",
"The CI error is unrelated to this PR, it's been fixed now on master.",
"Thanks @PaulLerner !",
"> #- [ ] - Hey![image](https://user-images.githubusercontent.com/71971234/121969638-00030e00-cd75-11eb-9512-25d32ac08051.jpeg)@fr[fr_fr**fr~~fr `fr```\nFR\n````~~**_]()",
"Oh that was unexpected. I didn't know pokemons were into NLP"
] | 1,622,116,469,000 | 1,623,760,557,000 | 1,623,691,750,000 | CONTRIBUTOR | null | cc @yjernite is it ok like this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2410/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2410",
"html_url": "https://github.com/huggingface/datasets/pull/2410",
"diff_url": "https://github.com/huggingface/datasets/pull/2410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2410.patch",
"merged_at": 1623691750000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2409/comments | https://api.github.com/repos/huggingface/datasets/issues/2409/events | https://github.com/huggingface/datasets/pull/2409 | 903,441,398 | MDExOlB1bGxSZXF1ZXN0NjU0Njk3NjA0 | 2,409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I thought the renaming was suggested only for the env var, and not for the config variable... As you think is better! ;)",
"I think it's better if they match, so that users understand directly that they're directly connected",
"Well, if you're not concerned about back-compat here, perhaps it could be renamed and shortened too ;)\r\n\r\nI'd suggest one of:\r\n\r\n* `HF_DATASETS_IN_MEMORY_MAX_SIZE`\r\n* `HF_DATASETS_MAX_IN_MEMORY_SIZE`\r\n\r\nthe itention is to:\r\n1. make it consistent with all the other `datasets` env vars which all start with `HF_DATASETS_`, e.g.:\r\n```\r\nHF_DATASETS_CACHE\r\nHF_DATASETS_OFFLINE \r\n```\r\n2. allow to recode in the future to support 1M, 4K, 1T and not just bytes - bytes is not a great choice for this type of variable since it will be at least X Mbytes for most reasonable uses.\r\n\r\nAnd I agree with @albertvillanova that the config variable name shouldn't have the HF prefix - it's preaching to the choir - the user already knows it's a local variable. \r\n\r\nThe only reason we prefix env vars, is because they are used outside of the software.\r\n\r\nBut I do see a good point of you trying to make things consistent too. How about this:\r\n\r\n`config.IN_MEMORY_MAX_SIZE` (or whatever the final env var will be minus `HF_DATASETS_` prefix).\r\n\r\nThis is of course just my opinion.\r\n\r\n",
"Thanks for the comment :)\r\nI like both propositions, and I agree this would be better in order to allow support for 1M, 1T etc. \r\nRegarding the prefix of the variable in config.py I don't have a strong opinion. I just added it for consistency with the other variables that default to the env variables like HF_DATASETS_CACHE. However I agree this would be nice to have shorter names so I'm not against removing the prefix either. Since the feature is relatively new, I think we can still allow ourself to rename it",
"Awesome, \r\n\r\nLet's use then:\r\n\r\n- `HF_DATASETS_IN_MEMORY_MAX_SIZE` for the env var\r\n- `config.IN_MEMORY_MAX_SIZE` for config.\r\n\r\nand for now bytes will be documented as the only option and down the road add support for K/M/G.\r\n\r\n@albertvillanova, does that sound good to you?",
"Great!!! ๐ค ",
"Did I miss a PR with this change?\r\n\r\nI want to make sure to add it to transformers tests to avoid the overheard of rebuilding the datasets.\r\n\r\nThank you!",
"@stas00 I'm taking on this now that I have finally finished the collaborative training experiment. Sorry for the delay.",
"Yes, of course! Thank you for taking care of it, @albertvillanova ",
"Actually, why is this feature on by default? \r\n\r\nUsers are very unlikely to understand what is going on or to know where to look. Should it at the very least emit a warning that this was done w/o asking the user to do so and how to turn it off?\r\n\r\nIMHO, this feature should be enabled explicitly by those who want it and not be On by default. This is an optimization that benefits only select users and is a burden on the rest.\r\n\r\nIn my line of dev/debug work (multiple short runs that have to be very fast) now I have to remember to disable this feature explicitly on every machine I work :(\r\n",
"Having the dataset in memory is nice to get the speed but I agree that the lack of caching for dataset in memory is an issue. By default we always had caching on.\r\nHere the issue is that in-memory datasets are still not able to use the cache - we should fix this asap IMO.\r\n\r\nHere is the PR that fixes this: https://github.com/huggingface/datasets/pull/2329",
"But why do they have to be datasets in memory in the first place? Why not just have the default that all datasets are normal and are cached which seems to be working solidly. And only enable in memory datasets explicitly if the user chooses to and then it doesn't matter if it's cached on not for the majority of the users who will not make this choice.\r\n\r\nI mean the definition of in-memory-datasets is very arbitrary - why 250MB and not 5GB? It's very likely that the user will want to set this threshold based on their RAM availability. So while doing that they can enable the in-memory-datasets. Unless I'm missing something here.\r\n\r\nThe intention here is that things work well in general out of the box, and further performance optimizations are available to those who know what they are doing.\r\n",
"This is just for speed improvements, especially for data exploration/experiments in notebooks. Ideally it shouldn't have changed anything regarding caching behavior in the first place (i.e. have the caching enabled by default).\r\n\r\nThe 250MB limit has also been chosen to not create unexpected high memory usage on small laptops.",
"Won't it be more straight-forward to create a performance optimization doc and share all these optimizations there? That way the user will be in the knowing and will be able to get faster speeds if their RAM is large. \r\n\r\nIt is hard for me to tell the average size of a dataset an average user will have, but my gut feeling is that many NLP datasets are larger than 250MB. Please correct me if I'm wrong.\r\n\r\nBut at the same time what you're saying is that once https://github.com/huggingface/datasets/pull/2329 is completed and merged, the in-memory-datasets will be cached too. So if I wait long enough the whole issue will go away altogether, correct?"
] | 1,622,106,420,000 | 1,623,168,055,000 | 1,622,108,021,000 | MEMBER | null | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2409/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2409",
"html_url": "https://github.com/huggingface/datasets/pull/2409",
"diff_url": "https://github.com/huggingface/datasets/pull/2409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2409.patch",
"merged_at": 1622108021000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2408/comments | https://api.github.com/repos/huggingface/datasets/issues/2408/events | https://github.com/huggingface/datasets/pull/2408 | 903,422,648 | MDExOlB1bGxSZXF1ZXN0NjU0NjgxMzE4 | 2,408 | Fix head_qa keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,105,419,000 | 1,622,106,337,000 | 1,622,106,336,000 | MEMBER | null | There were duplicates in the keys, as mentioned in #2382 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2408",
"html_url": "https://github.com/huggingface/datasets/pull/2408",
"diff_url": "https://github.com/huggingface/datasets/pull/2408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2408.patch",
"merged_at": 1622106336000
} | true |
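As background on the fix above: a dataset script's `_generate_examples` must yield a unique key for every example, otherwise a `DuplicatedKeysError` is raised. A hypothetical sketch of the usual pattern follows; the file format and field names are illustrative, not the actual head_qa ones.

```python
import json

def generate_examples(filepath):
    """Sketch of a loader's `_generate_examples`: yield (key, example) pairs
    whose keys are unique across the whole split."""
    with open(filepath, encoding="utf-8") as f:
        rows = json.load(f)  # assumed: a list of question dicts
    for idx, row in enumerate(rows):
        # The enumeration index (or a composite key such as f"{row['qid']}_{idx}")
        # stays unique even when ids in the raw data repeat.
        yield idx, {"question": row.get("question"), "answer": row.get("answer")}
```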
https://api.github.com/repos/huggingface/datasets/issues/2407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2407/comments | https://api.github.com/repos/huggingface/datasets/issues/2407/events | https://github.com/huggingface/datasets/issues/2407 | 903,111,755 | MDU6SXNzdWU5MDMxMTE3NTU= | 2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | {
"login": "cindyxinyiwang",
"id": 7390482,
"node_id": "MDQ6VXNlcjczOTA0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7390482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cindyxinyiwang",
"html_url": "https://github.com/cindyxinyiwang",
"followers_url": "https://api.github.com/users/cindyxinyiwang/followers",
"following_url": "https://api.github.com/users/cindyxinyiwang/following{/other_user}",
"gists_url": "https://api.github.com/users/cindyxinyiwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cindyxinyiwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cindyxinyiwang/subscriptions",
"organizations_url": "https://api.github.com/users/cindyxinyiwang/orgs",
"repos_url": "https://api.github.com/users/cindyxinyiwang/repos",
"events_url": "https://api.github.com/users/cindyxinyiwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/cindyxinyiwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558",
"Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.",
"I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!"
] | 1,622,080,466,000 | 1,622,123,200,000 | 1,622,123,200,000 | NONE | null | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest datasets release, 1.6.2. It also seems like the documentation and the actual code indicate there is an argument 'cache_file_name' for the .map() function.
Here is the code I use
## Steps to reproduce the bug
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2407/timeline | null | completed | null | null | false |
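As the comments above explain, `cache_file_name` is an argument of `Dataset.map`, while `DatasetDict.map` takes `cache_file_names`, one file per split. A minimal sketch of the working call, assuming `my_path`, `tokenizer` and `text_column_name` are defined as in the original snippet:

```python
from datasets import load_from_disk

datasets = load_from_disk(dataset_path=my_path)  # a DatasetDict: one Dataset per split

def tokenize_function(examples):
    return tokenizer(examples[text_column_name])

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    load_from_cache_file=True,
    # DatasetDict.map expects one cache file per split (note the plural argument name);
    # a single Dataset would take the singular `cache_file_name` instead.
    cache_file_names={split: f"my_tokenized_{split}.arrow" for split in datasets},
)
```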
https://api.github.com/repos/huggingface/datasets/issues/2406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2406/comments | https://api.github.com/repos/huggingface/datasets/issues/2406/events | https://github.com/huggingface/datasets/issues/2406 | 902,643,844 | MDU6SXNzdWU5MDI2NDM4NDQ= | 2,406 | Add guide on using task templates to documentation | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,622,046,506,000 | 1,622,046,506,000 | null | MEMBER | null | Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2406/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2405/comments | https://api.github.com/repos/huggingface/datasets/issues/2405/events | https://github.com/huggingface/datasets/pull/2405 | 901,227,658 | MDExOlB1bGxSZXF1ZXN0NjUyNzA2OTk1 | 2,405 | Add dataset tags | {
"login": "OyvindTafjord",
"id": 6453366,
"node_id": "MDQ6VXNlcjY0NTMzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6453366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OyvindTafjord",
"html_url": "https://github.com/OyvindTafjord",
"followers_url": "https://api.github.com/users/OyvindTafjord/followers",
"following_url": "https://api.github.com/users/OyvindTafjord/following{/other_user}",
"gists_url": "https://api.github.com/users/OyvindTafjord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OyvindTafjord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OyvindTafjord/subscriptions",
"organizations_url": "https://api.github.com/users/OyvindTafjord/orgs",
"repos_url": "https://api.github.com/users/OyvindTafjord/repos",
"events_url": "https://api.github.com/users/OyvindTafjord/events{/privacy}",
"received_events_url": "https://api.github.com/users/OyvindTafjord/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | 1,621,969,049,000 | 1,622,048,056,000 | 1,622,047,207,000 | CONTRIBUTOR | null | The dataset tags were provided by Peter Clark following the guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2405/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2405",
"html_url": "https://github.com/huggingface/datasets/pull/2405",
"diff_url": "https://github.com/huggingface/datasets/pull/2405.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2405.patch",
"merged_at": 1622047207000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2404/comments | https://api.github.com/repos/huggingface/datasets/issues/2404/events | https://github.com/huggingface/datasets/pull/2404 | 901,179,832 | MDExOlB1bGxSZXF1ZXN0NjUyNjYzOTcz | 2,404 | Paperswithcode dataset mapping | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"messed up my branch, repushing",
"live mapping can be found at https://huggingface.co/api/pwc/datasets-mapping and will be kept up to date going forward"
] | 1,621,966,466,000 | 1,622,028,085,000 | 1,622,027,838,000 | MEMBER | null | This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards.
As discussed:
- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.
- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as pyyaml's default. No strong opinion on that one though
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2404/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2404",
"html_url": "https://github.com/huggingface/datasets/pull/2404",
"diff_url": "https://github.com/huggingface/datasets/pull/2404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2404.patch",
"merged_at": 1622027838000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2403/comments | https://api.github.com/repos/huggingface/datasets/issues/2403/events | https://github.com/huggingface/datasets/pull/2403 | 900,059,014 | MDExOlB1bGxSZXF1ZXN0NjUxNjcxMTMw | 2,403 | Free datasets with cache file in temp dir on exit | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,894,511,000 | 1,622,049,919,000 | 1,622,047,169,000 | CONTRIBUTOR | null | This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir.
Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function.
Fixes #2402 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2403/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2403",
"html_url": "https://github.com/huggingface/datasets/pull/2403",
"diff_url": "https://github.com/huggingface/datasets/pull/2403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2403.patch",
"merged_at": 1622047169000
} | true |
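To give a feel for the approach described in this PR (the actual implementation lives in the PR diff and may differ), here is a minimal, hypothetical sketch of a temporary directory that runs a user-supplied callback, for example one that drops references to memory-mapped cache files, before deleting the directory:

```python
import shutil
import tempfile
import weakref

class TempDirWithCustomCleanup:
    """Illustrative sketch only, not the `datasets` implementation."""

    def __init__(self, before_delete=None):
        self.name = tempfile.mkdtemp()
        self._before_delete = before_delete
        # weakref.finalize also runs at interpreter exit, mirroring TemporaryDirectory.
        self._finalizer = weakref.finalize(self, self._cleanup, self.name, before_delete)

    @staticmethod
    def _cleanup(name, before_delete):
        if before_delete is not None:
            before_delete()  # e.g. release memory-mapped Arrow tables first
        shutil.rmtree(name, ignore_errors=True)

    def cleanup(self):
        # Detach the finalizer so the cleanup only ever runs once.
        if self._finalizer.detach():
            self._cleanup(self.name, self._before_delete)
```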
https://api.github.com/repos/huggingface/datasets/issues/2402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2402/comments | https://api.github.com/repos/huggingface/datasets/issues/2402/events | https://github.com/huggingface/datasets/issues/2402 | 900,025,329 | MDU6SXNzdWU5MDAwMjUzMjk= | 2,402 | PermissionError on Windows when using temp dir for caching | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,621,891,379,000 | 1,622,047,169,000 | 1,622,047,169,000 | CONTRIBUTOR | null | Currently, the following code raises a PermissionError on master if working on Windows:
```python
# run as a script or call exit() in REPL to initiate the temp dir cleanup
from datasets import *
d = load_dataset("sst", split="train", keep_in_memory=False)
set_caching_enabled(False)
d.map(lambda ex: ex)
```
Error stack trace:
```
Traceback (most recent call last):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 624, in _exitfunc
f()
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 548, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\tempfile.py", line 799, in _cleanup
_shutil.rmtree(name)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 500, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 395, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 393, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Mario\\AppData\\Local\\Temp\\tmp20epyhmq\\cache-87a87ffb5a956e68.arrow'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2402/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2401/comments | https://api.github.com/repos/huggingface/datasets/issues/2401/events | https://github.com/huggingface/datasets/issues/2401 | 899,910,521 | MDU6SXNzdWU4OTk5MTA1MjE= | 2,401 | load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset" | {
"login": "jonrbates",
"id": 15602718,
"node_id": "MDQ6VXNlcjE1NjAyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/15602718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonrbates",
"html_url": "https://github.com/jonrbates",
"followers_url": "https://api.github.com/users/jonrbates/followers",
"following_url": "https://api.github.com/users/jonrbates/following{/other_user}",
"gists_url": "https://api.github.com/users/jonrbates/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonrbates/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonrbates/subscriptions",
"organizations_url": "https://api.github.com/users/jonrbates/orgs",
"repos_url": "https://api.github.com/users/jonrbates/repos",
"events_url": "https://api.github.com/users/jonrbates/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonrbates/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I faced the similar problem. Downgrading datasets to 1.5.0 fixed it.",
"Thanks for reporting, I'm looking into it",
"I just opened #2438 to fix this :)",
"Hi ! This has been fixed in the 1.8.0 release of `datasets`"
] | 1,621,881,533,000 | 1,623,229,645,000 | 1,623,229,645,000 | NONE | null | ## Describe the bug
load_dataset('natural_questions') throws ValueError
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('natural_questions', split='validation[:10]')
```
## Expected results
Call to load_dataset returns data.
## Actual results
```
Using custom data configuration default
Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-d55ab8a8cc1c> in <module>
----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets')
~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
757 )
--> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
759 if save_infos:
760 builder_instance._save_infos()
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)
735
736 # Create a dataset for each of the given splits
--> 737 datasets = utils.map_nested(
738 partial(
739 self._build_single_dataset,
~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
193 # Singleton
194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 195 return function(data_struct)
196
197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)
762
763 # Build base dataset
--> 764 ds = self._as_dataset(
765 split=split,
766 in_memory=in_memory,
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)
838 in_memory=in_memory,
839 )
--> 840 return Dataset(**dataset_kwargs)
841
842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]:
~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
272 if self.info.features.type != inferred_features.type:
--> 273 raise ValueError(
274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
275 self.info.features, self.info.features.type, inferred_features, inferred_features.type
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>>
but expected something like
{'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2400/comments | https://api.github.com/repos/huggingface/datasets/issues/2400/events | https://github.com/huggingface/datasets/issues/2400 | 899,867,212 | MDU6SXNzdWU4OTk4NjcyMTI= | 2,400 | Concatenate several datasets with removed columns is not working. | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\ndid you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?\r\n\r\nThis code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error).",
"@mariosasko you are right I was still on `1.5.0`. "
] | 1,621,878,015,000 | 1,621,921,921,000 | 1,621,921,919,000 | MEMBER | null | ## Describe the bug
You can't concatenate datasets if you have removed columns from them beforehand.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] = wikiann["test"].remove_columns(["langs","spans"])
assert wikiann["train"].features.type == wikiann["test"].features.type
concate = concatenate_datasets([wikiann["train"],wikiann["test"]])
```
## Expected results
Merged dataset
## Actual results
```python
ValueError: External features info don't match the dataset:
Got
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>>
but expected something like
{'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<ner_tags: list<item: int64>, tokens: list<item: string>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: ~1.6.2~ 1.5.0
- Platform: macos
- Python version: 3.8.5
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2400/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2399/comments | https://api.github.com/repos/huggingface/datasets/issues/2399/events | https://github.com/huggingface/datasets/pull/2399 | 899,853,610 | MDExOlB1bGxSZXF1ZXN0NjUxNDk0OTc2 | 2,399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for clarifying the precedence, @albertvillanova \r\n\r\nIsn't it typically the case where env vars have the highest precedence? \r\n\r\nIn my understanding the point of env vars is to be able to override software w/o needing to touch the code. \r\n\r\nPlease correct me if this is not so in the general case.",
"Hi @stas00, \r\n\r\nWell, I'm not an expert on this topic, but the precedence hierarchy I have normally found is from higher to lower:\r\n- command line parameters\r\n- env vars\r\n- config files\r\nSo yes, normally env vars have precedence over configuration files.\r\n\r\nAnyway, for Datasets, there are no configuration files. The _in-memory_ config is set from default values or env vars (which have precedence over default values). But this is done at import.\r\n\r\nHowever, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).",
"In my limited experience env vars are typically above cmd line args, so that one can override scripts with cmd lines using env vars, but usually one then uses env vars inside cmd line args, so it's loud and clear.\r\n\r\nFor example specifying a specific gpu number on a command line will depend on `CUDA_VISIBLE_DEVICES` so gpu0 will be different if someone sets `CUDA_VISIBLE_DEVICES=2,3` vs `CUDA_VISIBLE_DEVICES=0,1`.\r\n\r\n> However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).\r\n\r\nAnd this is exactly the problem we are trying to solve here. For a good reason HF examples don't want to use `keep_in_memory=False`, and they may choose to now set `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` and which means we again can't override it via env var.\r\n\r\n",
"oops, sorry, didn't think earlier - do we need to prefix this with `HF_DATASETS` or `HF_` like all the other env vars? or is it long enough already to be unique - it's just not telling the user in the config file what projet this variable is for...",
"You're right, I just opened https://github.com/huggingface/datasets/pull/2409"
] | 1,621,876,755,000 | 1,622,106,435,000 | 1,622,045,274,000 | MEMBER | null | Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow turning off the default behavior of loading small datasets in memory (and not caching them).
Fix #2387. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2399/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2399",
"html_url": "https://github.com/huggingface/datasets/pull/2399",
"diff_url": "https://github.com/huggingface/datasets/pull/2399.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2399.patch",
"merged_at": 1622045274000
} | true |
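Putting together the naming agreed on in the comments above (the exact names may differ between `datasets` releases), a minimal sketch of the two override mechanisms and the precedence the maintainers describe: the env var is read at import time, while the in-memory config can be changed afterwards and then takes precedence.

```python
import os

# Option 1: environment variable, read when `datasets` is imported.
# A value of 0 is assumed here to mean "never load datasets fully in memory";
# check the installed version's documentation for the exact semantics.
os.environ["HF_DATASETS_IN_MEMORY_MAX_SIZE"] = "0"

import datasets

# Option 2: override the in-memory config after import; per the discussion above,
# this takes precedence over the env var and the built-in defaults.
datasets.config.IN_MEMORY_MAX_SIZE = 250 * 2**20  # e.g. 250 MiB

ds = datasets.load_dataset("sst", split="train")
print(ds)
```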
https://api.github.com/repos/huggingface/datasets/issues/2398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2398/comments | https://api.github.com/repos/huggingface/datasets/issues/2398/events | https://github.com/huggingface/datasets/issues/2398 | 899,511,837 | MDU6SXNzdWU4OTk1MTE4Mzc= | 2,398 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"login": "anassalamah",
"id": 8571003,
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anassalamah",
"html_url": "https://github.com/anassalamah",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,621,850,614,000 | 1,621,850,614,000 | null | NONE | null | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain
from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')

# filter out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327), range(1384,1399), range(1030,1042)), with_indices=True)
```
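To double-check which rows are mislabeled, one can print a few of the suspect indices before running the `filter` above (a sketch; the `translation` dict with `ar`/`en` keys is assumed from the dataset's standard layout):

```python
# Run this before the filter call: the indices refer to the unfiltered validation split.
for i in (1030, 1312, 1384):
    print(i, val_ds[i]["translation"])  # expect keys "ar" and "en"; the values reveal Hindi text
```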
* I'm fairly new to using datasets so I might be doing something wrong | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2398/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2397/comments | https://api.github.com/repos/huggingface/datasets/issues/2397/events | https://github.com/huggingface/datasets/pull/2397 | 899,427,378 | MDExOlB1bGxSZXF1ZXN0NjUxMTMxMTY0 | 2,397 | Fix number of classes in indic_glue sna.bn dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq there are many things missing in the README.md file, but this correction is right despite not passing the validation tests...",
"Yes indeed. We run the validation in all modified readme because we think that it is the time when contributors are the most likely to fix a dataset card - or it will never be done"
] | 1,621,844,335,000 | 1,621,960,336,000 | 1,621,960,336,000 | MEMBER | null | As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2397/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2397/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2397",
"html_url": "https://github.com/huggingface/datasets/pull/2397",
"diff_url": "https://github.com/huggingface/datasets/pull/2397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2397.patch",
"merged_at": 1621960336000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2396/comments | https://api.github.com/repos/huggingface/datasets/issues/2396/events | https://github.com/huggingface/datasets/issues/2396 | 899,016,308 | MDU6SXNzdWU4OTkwMTYzMDg= | 2,396 | strange datasets from OSCAR corpus | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting\r\ncc @pjox is this an issue from the data ?\r\n\r\nAnyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere ",
"Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?"
] | 1,621,775,162,000 | 1,623,938,077,000 | null | CONTRIBUTOR | null | ![image](https://user-images.githubusercontent.com/50871412/119260850-4f876b80-bc07-11eb-8894-124302600643.png)
![image](https://user-images.githubusercontent.com/50871412/119260875-675eef80-bc07-11eb-9da4-ee27567054ac.png)
According to the [official site](https://oscar-corpus.com/), the Yue Chinese dataset should contain 2.2 KB of data.
Seven training instances is obviously not the right number.
As I can read Yue Chinese, I can tell that the last instance is definitely not something that would appear on Common Crawl.
And even if you don't read Yue Chinese, you can tell the first six instances are problematic.
(It is embarrassing, as the seven training instances look like excerpts from a pornographic novel or flirting messages from a dating app.)
This might not be a problem with the huggingface/datasets implementation: when I tried to download the dataset from the official site, I found that the zip file is corrupted.
I will try to inform the host of the OSCAR corpus later.
Anyway, a remake of this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue.
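For anyone who wants to reproduce this from the library rather than the viewer, something like the following should show the same handful of rows (a sketch; the config name `unshuffled_deduplicated_yue` is my assumption based on OSCAR's naming scheme):

```python
from datasets import load_dataset

# Load the deduplicated Yue Chinese portion of OSCAR and dump every training example.
oscar_yue = load_dataset("oscar", "unshuffled_deduplicated_yue", split="train")
print(len(oscar_yue))  # a single-digit count is already a red flag for a web crawl
for example in oscar_yue:
    print(example["text"])
```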
> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?
Thanks a lot, the new post is here:
https://github.com/oscar-corpus/oscar-website/issues/11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2396/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2395/comments | https://api.github.com/repos/huggingface/datasets/issues/2395/events | https://github.com/huggingface/datasets/pull/2395 | 898,762,730 | MDExOlB1bGxSZXF1ZXN0NjUwNTk3NjI0 | 2,395 | `pretty_name` for dataset in YAML tags | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Initially I removed the ` - ` since there was only one `pretty_name` per config but turns out it was breaking here in `from_yaml_string`https://github.com/huggingface/datasets/blob/74751e3f98c74d22c48c6beb1fab0c13b5dfd075/src/datasets/utils/metadata.py#L197 in `/utils/metadata.py`",
"@lhoestq I guess this will also need some validation?",
"Looks like the parser doesn't allow things like\r\n```\r\npretty_name:\r\n config_name1: My awesome config number 1\r\n config_name2: My amazing config number 2\r\n```\r\ntherefore you had to use `-` and consider them as a list.\r\n\r\nI would be nice to add support for this case in the validator.\r\n\r\nThere's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.\r\n\r\nTherefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.\r\n\r\nWhat do you think @gchhablani ?",
"I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).\r\n\r\nOne just needs config_name as key for the dictionary inside `pretty_name` dict and for single config, there would be only one value to print. We can do this for other fields as well like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani? \r\n\r\nUpdate: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.",
"Hi @lhoestq @bhavitvyamalik \r\n\r\n@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.\r\n\r\nFew things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.\r\n\r\n1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases that are like `Dict[str,List[Dict[str,str]]`, but I guess that won't be needed anyway in the foreseeable future?\r\n\r\n Ideally, it would be something like - Check the metadata tag type (root), do a DFS, and find the non-dict objects (leaves), and validate them. Is this an overkill to handle the problem?\r\n2. For single-config datasets, there can be slightly different validation for `pretty_names`, than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. This will check whether the config name is same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For future, however, it might be beneficial to have something like this.\r\n3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq?",
"Btw, `pretty_names` can also be used to handle this during validation :P \r\n\r\n```\r\n-# Dataset Card for [Dataset Name]\r\n+# Dataset Card for Allegro Reviews\r\n```\r\n\r\nThis is where `DatasetMetadata` and `ReadMe` should be combined. But there are very few overlaps, I guess.\r\n\r\n\n@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?\n\nLike\n\n```yaml\npretty_names:\n all_configs: X (dataset_name)\n config_1: X1 (config_1_name)\n config_2: X2 (config_2_name)\n```\nThen, using the `metadata_dict`, the ReadMe header can be validated against `X`.\n\nSorry if I'm throwing too many ideas at once.",
"@bhavitvyamalik\r\n\r\nNow, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version?",
"Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets.",
"@bhavitvyamalik\r\nActually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation. \r\n\r\nMaybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq? \r\n\r\nI'm sensing too many things to look into. It'd be great to discuss these sometime. \r\n\r\nBut if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation.",
"> Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?\r\n\r\nWe can definitely have a `is_valid()` method instead of doing it in the post init.\r\n\r\n> What about adding a pretty name across all configs, and then config-specific names?\r\n\r\nLet's keep things simple to starts with. If we can allow both single-config and multi-config cases it would already be great :)\r\n\r\nfor single-config:\r\n```yaml\r\npretty_name: Allegro Reviews\r\n```\r\n\r\nfor multi-config:\r\n```yaml\r\npretty_name:\r\n mrpc: Microsoft Research Paraphrase Corpus (MRPC)\r\n sst2: Stanford Sentiment Treebank\r\n ...\r\n```\r\n\r\nTo support the multi-config case I see two options:\r\n1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config\r\n2. allow DatasetMetadata to have dictionaries. It implies to remove the flattening step. Then we could get metadata for a specific config this way for example:\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\nglue_dataset_card = load_dataset_card(\"glue\")\r\nprint(glue_dataset_card.metadata)\r\n# DatasetMetatada object with dictionaries since there are many configs\r\nprint(glue_dataset_card.metadata.get_metadata_for_config(\"mrpc\"))\r\n# DatasetMetatada object with no dictionaries since there are only the mrpc tags\r\n```\r\n\r\nLet me know what you think or if you have other ideas.",
"I think Option 2 is better.\n\nJust to clarify, will `get_metadata_for_config` also return common details, like language, say?",
"> Just to clarify, will get_metadata_for_config also return common details, like language, say?\r\n\r\nYes that would be more convenient IMO. For example a dataset card like this\r\n```yaml\r\nlanguages:\r\n- en\r\npretty_name:\r\n config1: Pretty Name for Config 1\r\n config3: Pretty Name for Config 2\r\n```\r\n\r\nthen `metadat.get_metadata_for_config(\"config1\")` would return something like\r\n```python\r\nDatasetMetadata(languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\")\r\n```",
"@lhoestq, should we do this post-processing in `load_dataset_card` by returning unflattened dictionary from `DatasetMetadata` or send this from `DatasetMetadata`? Since there isn't much to do I feel once we have the unflattened dictionary",
"Not sure I understand the difference @bhavitvyamalik , could you elaborate please ?",
"I was talking about this unflattened dictionary:\r\n\r\n> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https://pastebin.com/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.\r\n\r\nPost-processing meant extracting config-specific fields from this dictionary and then return this `languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\"`",
"I still don't understand what you mean by \"returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata\", sorry. Can you give an example or rephrase this ?\r\n\r\nIMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this",
"@lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.\r\n\r\n@bhavitvyamalik, I think it'd be better to have this \"post-processing\" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.\r\n\r\n---\r\nThree things that are to be changed in `DatasetMetadata`:\r\n1. Allow Non-flat elements and their validation.\r\n2. Create a method to get metadata by config name.\r\n3. Create a `validate()` method.\r\n\r\nOnce that is done, this PR can be updated and reviewed, wdys?",
"Thanks @gchhablani for the help ! Now that https://github.com/huggingface/datasets/pull/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :)"
] | 1,621,675,485,000 | 1,624,544,051,000 | null | CONTRIBUTOR | null | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If a dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset`, since the config names were `plain_text`, `default`, `squad`, etc. (not so important in this case). When a dataset has more than one config, I've added `config_name: full_name_of_dataset+config_name` so as to let the user know about the `config` here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2395/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2395/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2395",
"html_url": "https://github.com/huggingface/datasets/pull/2395",
"diff_url": "https://github.com/huggingface/datasets/pull/2395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2395.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2392/comments | https://api.github.com/repos/huggingface/datasets/issues/2392/events | https://github.com/huggingface/datasets/pull/2392 | 898,156,795 | MDExOlB1bGxSZXF1ZXN0NjUwMDYxOTE3 | 2,392 | Update text classification template labels in DatasetInfo __post_init__ | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:\r\n```python\r\ndset = load_dataset(\"some_dataset\") # let's say 'some_dataset' supports text classification and question answering\r\ndset_tc = dset.prepare_for_task(\"text-classification\")\r\ndset_tc.preprare_for_task(\"question-answering\") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'\r\n```\r\nI see 2 options:\r\n1. to drop the task templates after the first `Dataset.prepare_for_task` call\r\n2. to save only the tasks compatible with the new schema after Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid) ",
"> If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:\r\n> \r\n> ```python\r\n> dset = load_dataset(\"some_dataset\") # let's say 'some_dataset' supports text classification and question answering\r\n> dset_tc = dset.prepare_for_task(\"text-classification\")\r\n> dset_tc.preprare_for_task(\"question-answering\") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'\r\n> ```\r\n> \r\n> I see 2 options:\r\n> \r\n> 1. to drop the task templates after the first `Dataset.prepare_for_task` call\r\n> 2. to save only the tasks compatible with the new schema after Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid)\r\n\r\nthanks for the great idea @mariosasko and for spotting the problem with sequential task preparation! i am in favour of your option (1) since it is simple and saves us from having to keep track of the column mappings across multiple steps. \r\n\r\ni've implemented the change and refactored the tests to account for the new approach (including a new test that the templates are flushed after we call `prepare_for_task`). perhaps the slightly inelegant aspect here is that if we want to allow the user to set `labels` in the `TextClassification` template, then we have two places (`DatasetInfo.__post_init__` and `TextClassification.__post_init__`) where we need to update `label_schema`. \r\n\r\non the other hand, dropping `labels` from the `TextClassification` signature would have the nice effect that users only have to think about column names when defining their tasks.\r\n\r\nin any case, i think it would be a good idea to merge #2376 soon as the current PR is touching a lot of the same places in the codebase ๐ \r\n",
"cc @SBrandeis who might also be interested in this feature :)",
"Tests are failing only because the `emotion` dataset card doesn't pass our dataset card validator (tags are missing), you can ignore this since it's unrelated to this PR.",
"@lhoestq @SBrandeis i've fixed the tests and think this is now in a good state for another review :)",
"Maybe @SBrandeis you can also take a look to make sure you're fine with it ?"
] | 1,621,610,981,000 | 1,622,201,855,000 | 1,622,201,852,000 | MEMBER | null | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).
Here is an example of the current workflow:
```python
ds1 = load_dataset("./datasets/emotion/")
# cast features and flush templates
ds2 = ds1.prepare_for_task("text-classification")
assert ds2.info.task_templates is None
```
Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:
```python
ds1 = load_dataset("./datasets/emotion/")
# TextClassification.labels is None by default => invalid template
task = TextClassification(text_column="text", label_column="label")
# Raises ValueError
ds1.prepare_for_task(task)
# Specifying the labels => valid template
task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])
ds1.prepare_for_task(task)
```
This PR also adds:
* New tests + fixed some old tests that weren't testing `assertRaises` properly
* A decorator to share docstrings across common functions (a minimal sketch of this pattern appears after this list). This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.
* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!
* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko
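The docstring-sharing decorator mentioned in the list above follows a simple pattern along these lines (an illustrative sketch, not the exact helper added in this PR):

```python
def copy_docstring(source):
    """Return a decorator that copies `source`'s docstring onto the decorated callable."""

    def decorator(target):
        target.__doc__ = source.__doc__
        return target

    return decorator


def prepare_for_task(task):
    """Prepare the dataset for the given task by casting its features to the task's schema."""
    ...


@copy_docstring(prepare_for_task)
def prepare_dict_for_task(task):
    # Shares the docstring of `prepare_for_task`, so the documentation lives in one place.
    ...
```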
### PR Description from original WIP
Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.
One problem I've spotted is that my current implementation introduces state into the `__post_init__`:
* When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`
* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below:
```python
from datasets import load_dataset
# this works
ds = load_dataset("emotion")
# we can verify the task template is correctly set
ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]
# but this fails because the _post_init__ is looking for the original column names
ds.prepare_for_task("text-classification")
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-54a43019b319> in <module>
----> 1 ds.prepare_for_task("text-classification")
~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
807 """
808 self._check_values_type()
--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1421 dataset = self.remove_columns(columns_to_drop)
1422 dataset = dataset.rename_columns(column_mapping)
-> 1423 dataset = dataset.cast(features=template.features)
1424 return dataset
1425
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
970 format = self.format
971 dataset = self.with_format("arrow")
--> 972 dataset = dataset.map(
973 lambda t: t.cast(schema),
974 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1583
1584 if num_proc is None or num_proc == 1:
-> 1585 return self._map_single(
1586 function=function,
1587 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
173 }
174 # apply actual function
--> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
177 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
338 # Call actual function
339
--> 340 out = func(self, *args, **kwargs)
341
342 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
1959 if update_data:
1960 # Create new Dataset from buffer or file
-> 1961 info = self.info.copy()
1962 info.features = writer._features
1963 if buf_writer is None:
~/git/datasets/src/datasets/info.py in copy(self)
274
275 def copy(self) -> "DatasetInfo":
--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
277
278
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the
175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key
--> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names))
177 template.label_schema["labels"] = ClassLabel(names=template.labels)
178 self.task_templates[idx] = template
KeyError: 'label'
```
What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One alternative would be to only update the labels of the task template on load, but this seems a bit hacky IMO | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2392/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2392",
"html_url": "https://github.com/huggingface/datasets/pull/2392",
"diff_url": "https://github.com/huggingface/datasets/pull/2392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2392.patch",
"merged_at": 1622201852000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2391/comments | https://api.github.com/repos/huggingface/datasets/issues/2391/events | https://github.com/huggingface/datasets/issues/2391 | 898,128,099 | MDU6SXNzdWU4OTgxMjgwOTk= | 2,391 | Missing original answers in kilt-TriviaQA | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ",
"I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: Iโm not sure what happens when you map `lambda x: {'input': ...}`ย as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['original_answer']`) I implemented it with a regular function (not lambda), see below\r\n\r\n```py\r\ndef add_original_answer(x, trivia_qa, triviaqa_map):\r\n i = triviaqa_map[x['id']]\r\n x['output']['original_answer'] = trivia_qa['validation'][i]['answer']['value']\r\n return x\r\n```"
] | 1,621,609,027,000 | 1,623,691,751,000 | 1,623,691,751,000 | CONTRIBUTOR | null | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']`, contains a list of alternative answers which are accepted for the question.
However, it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`).
## How to fix
It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place where one retrieves the questions: https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data
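A rough sketch of that fix, reusing the alignment trick from the dataset card and the `add_original_answer` helper proposed in the comments above (the split name, config names, and the new `original_answer` key are assumptions based on those snippets):

```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# Map each TriviaQA question_id to its row index so KILT examples can look up the original answer.
triviaqa_map = {q_id: i for i, q_id in enumerate(trivia_qa["validation"]["question_id"])}

def add_original_answer(x, trivia_qa, triviaqa_map):
    # Keep the accepted aliases in `answer` and add the single original answer alongside them.
    i = triviaqa_map[x["id"]]
    x["output"]["original_answer"] = trivia_qa["validation"][i]["answer"]["value"]
    return x

# Drop the few KILT examples without a TriviaQA counterpart, then attach the original answers.
kilt_tasks["validation_triviaqa"] = kilt_tasks["validation_triviaqa"].filter(lambda x: x["id"] in triviaqa_map)
kilt_tasks["validation_triviaqa"] = kilt_tasks["validation_triviaqa"].map(
    add_original_answer, fn_kwargs={"trivia_qa": trivia_qa, "triviaqa_map": triviaqa_map}
)
```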
cc @yjernite who previously answered an issue about KILT and TriviaQA :)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2391/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2391/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2390/comments | https://api.github.com/repos/huggingface/datasets/issues/2390/events | https://github.com/huggingface/datasets/pull/2390 | 897,903,642 | MDExOlB1bGxSZXF1ZXN0NjQ5ODQ0NjQ2 | 2,390 | Add check for task templates on dataset load | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM now, thank you =)"
] | 1,621,592,217,000 | 1,621,612,149,000 | 1,621,612,146,000 | MEMBER | null | This PR adds a check that the features of a dataset match the schema of each compatible task template. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2390/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2390/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2390",
"html_url": "https://github.com/huggingface/datasets/pull/2390",
"diff_url": "https://github.com/huggingface/datasets/pull/2390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2390.patch",
"merged_at": 1621612146000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2389/comments | https://api.github.com/repos/huggingface/datasets/issues/2389/events | https://github.com/huggingface/datasets/pull/2389 | 897,822,270 | MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz | 2,389 | Insert task templates for text classification | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Update: found a few datasets that slipped through the net. Adding them shortly!",
"You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?",
"> You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?\r\n\r\nhi @yjernite, these code insertions are auto-generated so could certainly be improved :) \r\n\r\njust so i understand, your idea is that instead of doing something like\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n task_templates=[\r\n TextClassification(\r\n labels=(\"Business\", \"Sci/Tech\", \"Sports\", \"World\"),\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ],\r\n )\r\n```\r\n\r\nwe could do the following:\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n info = datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n )\r\n\r\n info.task_templates = [\r\n TextClassification(\r\n labels=info.features.names,\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ]\r\n return info\r\n```\r\n\r\n",
"Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?",
"> Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?\r\n\r\nOh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature still because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )\r\n\r\nThere might be reasons where there should be a legitimate difference, but I can't really think of nay right now, and we can always duplicate the feature",
"Let's ignore the CI fails since they are unrelated to your changes. They're about dataset cards issues"
] | 1,621,586,186,000 | 1,622,215,738,000 | 1,622,215,588,000 | MEMBER | null | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type (a sketch of this check appears after this list)
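A sketch of the kind of check used to pick those candidates (illustrative only: the helper name is made up and the exact import path for `TextClassification` may differ, but the template arguments follow the usage shown in the comments above):

```python
from datasets import ClassLabel, Features, Value
from datasets.tasks import TextClassification

def infer_text_classification_template(features: Features):
    """Return a TextClassification template when the features are exactly one string Value
    column and one ClassLabel column; otherwise return None."""
    text_cols = [name for name, feat in features.items() if isinstance(feat, Value) and feat.dtype == "string"]
    label_cols = [name for name, feat in features.items() if isinstance(feat, ClassLabel)]
    if len(features) == 2 and len(text_cols) == 1 and len(label_cols) == 1:
        return TextClassification(
            labels=tuple(features[label_cols[0]].names),
            text_column=text_cols[0],
            label_column=label_cols[0],
        )
    return None
```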
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2389/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2389",
"html_url": "https://github.com/huggingface/datasets/pull/2389",
"diff_url": "https://github.com/huggingface/datasets/pull/2389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2389.patch",
"merged_at": 1622215588000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2388/comments | https://api.github.com/repos/huggingface/datasets/issues/2388/events | https://github.com/huggingface/datasets/issues/2388 | 897,767,470 | MDU6SXNzdWU4OTc3Njc0NzA= | 2,388 | Incorrect URLs for some datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,621,581,755,000 | 1,622,828,385,000 | 1,622,828,385,000 | MEMBER | null | ## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/
As a result, we can no longer load these datasets using `load_dataset`. The simple fix is to update the dead URL in each dataset script - will do this asap.
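For reference, the fix amounts to swapping the dead URL constant in each dataset script for the file's new location (sketched below for `bn_hate_speech`; the renamed file name is a placeholder since it isn't shown here):

```python
# datasets/bn_hate_speech/bn_hate_speech.py (sketch)
# Old, now-404 location taken from the stack trace below:
# _URL = "https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv"
# New location after the upstream rename (placeholder file name):
_URL = "https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/<renamed-file>.csv"
```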
## Steps to reproduce the bug
```python
from datasets import load_dataset
# pick one of the datasets from the list above
ds = load_dataset("bn_hate_speech")
```
## Expected results
Dataset loads without error.
## Actual results
```
Downloading: 3.36kB [00:00, 1.07MB/s]
Downloading: 2.03kB [00:00, 678kB/s]
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators
train_path = dl_manager.download_and_extract(_URL)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2388/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2387/comments | https://api.github.com/repos/huggingface/datasets/issues/2387/events | https://github.com/huggingface/datasets/issues/2387 | 897,566,666 | MDU6SXNzdWU4OTc1NjY2NjY= | 2,387 | datasets 1.6 ignores cache | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them)\r\n\r\n",
"Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on making the caching work for datasets in RAM.\r\n\r\nUntil then, I'd recommend passing `keep_in_memory=False` to the calls to `load_dataset` like here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/223943872e8c9c3fc11db3c6e93da07f5177423f/examples/pytorch/language-modeling/run_clm.py#L233\r\n\r\nThis way you say explicitly that you want your dataset to stay on the disk, and it will be able to recover previously computed results from the cache.",
"gotcha! thanks Quentin",
"OK, It doesn't look like we can use the proposed workaround - see https://github.com/huggingface/transformers/issues/11801\r\n\r\nCould you please add an env var for us to be able to turn off this unwanted in our situation behavior? It is really problematic for dev work, when one needs to restart the training very often and needs a quick startup time. Manual editing of standard scripts is not a practical option when one uses examples.\r\n\r\nThis could also be a problem for tests, which will be slower because of lack of cache, albeit usually we use tiny datasets there. I think we want caching for tests.\r\n\r\nThank you.",
"Hi @stas00, \r\n\r\nYou are right: an env variable is needed to turn off this behavior. I am adding it.\r\n\r\nFor the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`\r\n\r\nYou can find this info in the docs:\r\n- in the docstring of the parameter `keep_in_memory` of the function [`load_datasets`](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset):\r\n- in a Note in the docs about [Loading a Dataset](https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub)\r\n\r\n> The default in ๐คDatasets is to memory-map the dataset on drive if its size is larger than datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES (default 250 MiB); otherwise, the dataset is copied in-memory. This behavior can be disabled by setting datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None, and in this case the dataset is not loaded in memory.",
"Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I just as well can add `keep_in_memory=False`.\r\n\r\nMay be the low hanging fruit is to add `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in_memory=False` or `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0`?",
"@stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. This means the max size until which I load in memory is 0 bytes.\r\n\r\nTell me if this is logical/convenient, or I should change it.",
"In my PR, to turn off current default bahavior, you should set env variable to one of: `{\"\", \"OFF\", \"NO\", \"FALSE\"}`.\r\n\r\nFor example:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=\r\n```",
"IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.\r\n\r\nAlso \"SIZE_IN_BYTES\" that can take one of `{\"\", \"OFF\", \"NO\", \"FALSE\"}` is also quite odd.\r\n\r\nI think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` that can accept any numerical value to match the name of the variable, requires minimal logic and is very straightforward. \r\n\r\nSo if you could adjust this logic - then `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` is all that's needed to not do in-memory datasets.\r\n\r\nDoes it make sense?",
"I understand your point @stas00, as I am not very convinced with current implementation.\r\n\r\nMy concern is: which numerical value should then pass a user who wants `keep_in_memory=True` by default, independently of dataset size? Currently it is `0` for this case.",
"That's a good question, and again the normal bytes can be used for that:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)\r\n```\r\nSince it's unlikely that anybody will have more than 1TB RAM.\r\n\r\nIt's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in this context.\r\n\r\nNot sure when it was added and if there are back-compat issues here, but perhaps it could be renamed `MAX_IN_MEMORY_DATASET_SIZE` and support 1M, 1G, 1T, etc. \r\n\r\nBut scientific notation is quite intuitive too, as each 000 zeros is the next M, G, T multiplier. Minus the discrepancy of 1024 vs 1000, which adds up. And it is easy to write down `1e12`, as compared to `1099511627776` (2**40). (`1.1e12` is more exact).\r\n",
"Great! Thanks, @stas00.\r\n\r\nI am implementing your suggestion to turn off default value when set to `0`.\r\n\r\nFor the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation.",
"Awesome! Thank you, @albertvillanova!!!\r\n\r\n"
] | 1,621,555,978,000 | 1,622,045,274,000 | 1,622,045,274,000 | MEMBER | null | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`
>
> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:
> > `{'train': [], 'validation': []}`
>
I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
The first time, the startup is slow and shows some 5 tqdm bars. It shouldn't do that on subsequent runs, but with `datasets>1.5.0` it rebuilds the dataset on every run.
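For reference, the workaround discussed in the comments is to keep the dataset memory-mapped on disk so that `.map()` results can be found in the cache again. A minimal sketch, assuming `datasets>=1.6.0` (the `tokenize` function below is a stand-in, not the one from `run_clm.py`):
```python
from datasets import load_dataset

# keep_in_memory=False keeps the dataset on disk, so cached .map() results can be reloaded
raw_datasets = load_dataset("stas/openwebtext-10k", keep_in_memory=False)

def tokenize(batch):
    # placeholder preprocessing; run_clm.py would call a tokenizer here
    return {"n_chars": [len(text) for text in batch["text"]]}

tokenized_datasets = raw_datasets.map(tokenize, batched=True)
# on a second run this should be reloaded from the Arrow cache instead of recomputed
print(tokenized_datasets.cache_files)
```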
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2387/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2386/comments | https://api.github.com/repos/huggingface/datasets/issues/2386/events | https://github.com/huggingface/datasets/issues/2386 | 897,560,049 | MDU6SXNzdWU4OTc1NjAwNDk= | 2,386 | Accessing Arrow dataset cache_files | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks @bhavitvyamalik for referencing the workaround. Setting `keep_in_memory=False` is working."
] | 1,621,555,063,000 | 1,621,624,683,000 | 1,621,624,683,000 | NONE | null | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried loading the dataset with the `keep_in_memory=True` argument, but `cache_files` is still empty.
I was wondering if this is a bug or whether I need to pass additional arguments so I can access the cache_files.
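For reference, the workaround from the comments — forcing the dataset to stay memory-mapped on disk — makes `cache_files` non-empty again. A minimal sketch, assuming `datasets>=1.6.0`:
```python
from datasets import load_dataset

# keep_in_memory=False prevents the small dataset from being copied into RAM,
# so it stays backed by Arrow files on disk and cache_files is populated
train_data = load_dataset("conll2003", split="train", cache_dir="data", keep_in_memory=False)
print(train_data.cache_files[0]["filename"])
```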
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2386/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2385/comments | https://api.github.com/repos/huggingface/datasets/issues/2385/events | https://github.com/huggingface/datasets/pull/2385 | 897,206,823 | MDExOlB1bGxSZXF1ZXN0NjQ5MjM1Mjcy | 2,385 | update citations | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,533,248,000 | 1,621,600,698,000 | 1,621,600,698,000 | CONTRIBUTOR | null | To update citations for [Offenseval_dravidiain](https://huggingface.co/datasets/offenseval_dravidian)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2385/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2385",
"html_url": "https://github.com/huggingface/datasets/pull/2385",
"diff_url": "https://github.com/huggingface/datasets/pull/2385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2385.patch",
"merged_at": 1621600698000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2384/comments | https://api.github.com/repos/huggingface/datasets/issues/2384/events | https://github.com/huggingface/datasets/pull/2384 | 896,866,461 | MDExOlB1bGxSZXF1ZXN0NjQ4OTI4NTQ0 | 2,384 | Add args description to DatasetInfo | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the suggestions! I've included them and made a few minor tweaks along the way",
"Please merge master into this branch to fix the CI, I just fixed metadata validation tests."
] | 1,621,518,790,000 | 1,621,675,576,000 | 1,621,675,574,000 | MEMBER | null | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2384/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2384",
"html_url": "https://github.com/huggingface/datasets/pull/2384",
"diff_url": "https://github.com/huggingface/datasets/pull/2384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2384.patch",
"merged_at": 1621675573000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2383/comments | https://api.github.com/repos/huggingface/datasets/issues/2383/events | https://github.com/huggingface/datasets/pull/2383 | 895,779,723 | MDExOlB1bGxSZXF1ZXN0NjQ3OTU4MTQ0 | 2,383 | Improve example in rounding docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,450,763,000 | 1,621,601,602,000 | 1,621,600,589,000 | CONTRIBUTOR | null | Improves the example in the rounding subsection of the Split API docs. With this change, it should be clearer what the difference is between the `closest` and the `pct1_dropremainder` rounding. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2383/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2383",
"html_url": "https://github.com/huggingface/datasets/pull/2383",
"diff_url": "https://github.com/huggingface/datasets/pull/2383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2383.patch",
"merged_at": 1621600589000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2382/comments | https://api.github.com/repos/huggingface/datasets/issues/2382/events | https://github.com/huggingface/datasets/issues/2382 | 895,610,216 | MDU6SXNzdWU4OTU2MTAyMTY= | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | {
"login": "helloworld123-lab",
"id": 75953751,
"node_id": "MDQ6VXNlcjc1OTUzNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloworld123-lab",
"html_url": "https://github.com/helloworld123-lab",
"followers_url": "https://api.github.com/users/helloworld123-lab/followers",
"following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}",
"gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions",
"organizations_url": "https://api.github.com/users/helloworld123-lab/orgs",
"repos_url": "https://api.github.com/users/helloworld123-lab/repos",
"events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}",
"received_events_url": "https://api.github.com/users/helloworld123-lab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,439,388,000 | 1,622,381,176,000 | 1,622,381,176,000 | NONE | null | Hello everyone,
I am trying to use the head_qa dataset shown in the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en).
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I run the `load_dataset(...)` call above, it throws the following:
```
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-ea87002d32f0> in <module>()
2 from datasets import load_dataset
3 dataset = load_dataset(
----> 4 'head_qa', 'en')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 1
Keys should be unique and deterministic in nature
```
How can I fix the error? Thanks
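For context, `DuplicatedKeysError` is raised when the dataset script's `_generate_examples` yields the same key for two different examples. A purely hypothetical sketch of the pattern (not the actual `head_qa` loader) that avoids it by keying on the running index:
```python
import json

# illustrative only: yield a unique, deterministic key per example
def generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    for idx, record in enumerate(records):
        # using a field like record["qid"] as the key fails if two rows share a qid;
        # the enumerate index is always unique
        yield idx, record
```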
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2382/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2381/comments | https://api.github.com/repos/huggingface/datasets/issues/2381/events | https://github.com/huggingface/datasets/pull/2381 | 895,588,844 | MDExOlB1bGxSZXF1ZXN0NjQ3NzkyNDcw | 2,381 | add dataset card title | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,438,203,000 | 1,621,536,700,000 | 1,621,536,700,000 | CONTRIBUTOR | null | few of them were missed by me earlier which I've added now | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2381/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2381",
"html_url": "https://github.com/huggingface/datasets/pull/2381",
"diff_url": "https://github.com/huggingface/datasets/pull/2381.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2381.patch",
"merged_at": 1621536700000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2380/comments | https://api.github.com/repos/huggingface/datasets/issues/2380/events | https://github.com/huggingface/datasets/pull/2380 | 895,367,201 | MDExOlB1bGxSZXF1ZXN0NjQ3NTk3NTc3 | 2,380 | maintain YAML structure reading from README | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,426,327,000 | 1,621,429,718,000 | 1,621,429,718,000 | CONTRIBUTOR | null | How YAML used be loaded earlier in the string (structure of YAML was affected because of this and YAML for datasets with multiple configs was not being loaded correctly):
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
labeled_final:
- 10K<n<100K
labeled_swap:
- 10K<n<100K
unlabeled_final:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
```
How YAML is loaded in string now:
```
annotations_creators:
  labeled_final:
  - expert-generated
  labeled_swap:
  - expert-generated
  unlabeled_final:
  - machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
  labeled_final:
  - 10K<n<100K
  labeled_swap:
  - 10K<n<100K
  unlabeled_final:
  - 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2380/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2380",
"html_url": "https://github.com/huggingface/datasets/pull/2380",
"diff_url": "https://github.com/huggingface/datasets/pull/2380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2380.patch",
"merged_at": 1621429718000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2379/comments | https://api.github.com/repos/huggingface/datasets/issues/2379/events | https://github.com/huggingface/datasets/pull/2379 | 895,252,597 | MDExOlB1bGxSZXF1ZXN0NjQ3NDk2ODUx | 2,379 | Disallow duplicate keys in yaml tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,419,007,000 | 1,621,421,132,000 | 1,621,421,131,000 | MEMBER | null | Make sure that there are no duplicate keys in yaml tags.
I added the check in the yaml tree constructor's method, so that the verification is done at every level in the yaml structure.
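For illustration, a duplicate-key check at the constructor level can be done with PyYAML roughly like this — the class name and error type below are assumptions for the sketch, not necessarily what this PR implements:
```python
import yaml

class NoDuplicateSafeLoader(yaml.SafeLoader):
    def construct_mapping(self, node, deep=False):
        # collect the keys of this mapping node before constructing it
        keys = [self.construct_object(key_node, deep=deep) for key_node, _ in node.value]
        duplicates = {key for key in keys if keys.count(key) > 1}
        if duplicates:
            raise TypeError(f"Duplicate yaml keys found: {duplicates}")
        return super().construct_mapping(node, deep=deep)

# raises because "languages" appears twice at the same level
yaml.load("languages:\n- en\nlanguages:\n- fr\n", Loader=NoDuplicateSafeLoader)
```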
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2379/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2379",
"html_url": "https://github.com/huggingface/datasets/pull/2379",
"diff_url": "https://github.com/huggingface/datasets/pull/2379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2379.patch",
"merged_at": 1621421131000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2378/comments | https://api.github.com/repos/huggingface/datasets/issues/2378/events | https://github.com/huggingface/datasets/issues/2378 | 895,131,774 | MDU6SXNzdWU4OTUxMzE3NzQ= | 2,378 | Add missing dataset_infos.json files | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,621,411,872,000 | 1,621,411,872,000 | null | MEMBER | null | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPath('datasets/reclor/reclor.py')]
[PosixPath('datasets/json/README.md')]
[PosixPath('datasets/csv/README.md')]
[PosixPath('datasets/wikihow/wikihow.py'), PosixPath('datasets/wikihow/README.md')]
[PosixPath('datasets/c4/c4.py'), PosixPath('datasets/c4/README.md')]
[PosixPath('datasets/text/README.md')]
[PosixPath('datasets/lm1b/README.md'), PosixPath('datasets/lm1b/lm1b.py')]
[PosixPath('datasets/pandas/README.md')]
```
For `json`, `text`, `csv`, and `pandas` this is expected, but not for the others, which should be fixed.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2378/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2377/comments | https://api.github.com/repos/huggingface/datasets/issues/2377/events | https://github.com/huggingface/datasets/issues/2377 | 894,918,927 | MDU6SXNzdWU4OTQ5MTg5Mjc= | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.\r\nMore info at #1933 "
] | 1,621,389,877,000 | 1,622,803,151,000 | null | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arrow')
```
## Expected results
I expect that the saved dataset can be read by the official Apache Arrow methods.
## Actual results
```
File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table
reader.open(source, use_memory_map=memory_map)
File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open
File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-1.6.2
- Platform: Linux
- Python version: 3.7
- PyArrow version: 0.17.1, also 2.0.0
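As noted in the comments, the file written by `save_to_disk` currently uses the Arrow streaming format rather than the Feather/IPC file format. A possible interim workaround (not an official guarantee of the on-disk format) is to read it with the streaming reader:
```python
import pyarrow as pa

# read the streaming-format file that save_to_disk produced
with open("dataset_dir/dataset.arrow", "rb") as f:
    table = pa.ipc.open_stream(f).read_all()
print(table.num_rows)
```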
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2377/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2376/comments | https://api.github.com/repos/huggingface/datasets/issues/2376/events | https://github.com/huggingface/datasets/pull/2376 | 894,852,264 | MDExOlB1bGxSZXF1ZXN0NjQ3MTU1NDE4 | 2,376 | Improve task api code quality | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks good thanks, what do you think @lewtun ?",
"thanks for including the lazy `ClassLabel` class @mariosasko ! from my side this LGTM!"
] | 1,621,379,620,000 | 1,622,666,397,000 | 1,621,956,654,000 | CONTRIBUTOR | null | Improves the code quality of the `TaskTemplate` dataclasses.
Changes:
* replaces `return NotImplemented` with `raise NotImplementedError`
* replaces `sorted` with `len` in the uniqueness check
* defines `label2id` and `id2label` in the `TextClassification` template as properties
* replaces the `object.__setattr__(self, attr, value)` syntax with (IMO nicer) `self.__dict__[attr] = value` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2376/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2376",
"html_url": "https://github.com/huggingface/datasets/pull/2376",
"diff_url": "https://github.com/huggingface/datasets/pull/2376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2376.patch",
"merged_at": 1621956654000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2375/comments | https://api.github.com/repos/huggingface/datasets/issues/2375/events | https://github.com/huggingface/datasets/pull/2375 | 894,655,157 | MDExOlB1bGxSZXF1ZXN0NjQ2OTg2NTcw | 2,375 | Dataset Streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,362,000,000 | 1,624,466,102,000 | 1,624,466,101,000 | MEMBER | null | # Dataset Streaming
## API
Current API is
```python
from datasets import load_dataset
# Load an IterableDataset without downloading data
snli = load_dataset("snli", streaming=True)
# Access examples by streaming data
print(next(iter(snli["train"])))
# {'premise': 'A person on a horse jumps over a broken down airplane.',
# 'hypothesis': 'A person is training his horse for a competition.',
# 'label': 1}
```
I already implemented a few methods:
- IterableDataset.map: apply transforms on-the-fly to the examples
- IterableDataset.shuffle: shuffle the data _a la_ TFDS, i.e. with a shuffling buffer
- IterableDataset.with_format: set the format to `"torch"` to get a `torch.utils.data.IterableDataset`
- merge_datasets: merge two iterable datasets by alternating one or the other (you can specify the probabilities)
I would love to have your opinion on the API design :)
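A rough usage sketch combining these methods — the names follow the list above, but the exact signatures may differ from the final implementation:
```python
from datasets import load_dataset

snli = load_dataset("snli", streaming=True)

def lowercase(example):
    return {"premise": example["premise"].lower(), "hypothesis": example["hypothesis"].lower()}

train = snli["train"].map(lowercase)              # applied on the fly
train = train.shuffle(buffer_size=1000, seed=42)  # shuffling buffer, TFDS-style
train = train.with_format("torch")                # usable as a torch IterableDataset

print(next(iter(train)))
```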
## Implementation details
### Streaming
Data streaming is done using `fsspec` which has nice caching features.
To make dataset streaming work I extend the `open` function of dataset scripts to support opening remote files without downloading them entirely. It also works with remote compressed archives (currently only zip is supported):
```python
# Get a file-like object by streaming data from a remote file
open("https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt")
# Get a file-like object by streaming data from a remote compressed archive by using the hop separator "::"
open("zip://snli_1.0_train.txt::https://nlp.stanford.edu/projects/snli/snli_1.0.zip")
```
I also extend the `os.path.join` function to support navigation in remote compressed archives, since it has to deal with the `"::"` separator. This separator is used by `fsspec`.
Finally I also added a retry mechanism in case the connection fails during data streaming.
### Transforms
An IterableDataset wraps an ExamplesIterable instance. There are different subclasses depending on the transforms we want to apply:
- ExamplesIterable: the basic one
- MappedExamplesIterable: an iterable with a `map` function applied on the fly
- BufferShuffledExamplesIterable: an iterable with a shuffling buffer
- CyclingMultiSourcesExamplesIterable: alternates between several ExamplesIterable
- RandomlyCyclingMultiSourcesExamplesIterable: randomly alternates between several ExamplesIterable
### DatasetBuilder
I use the same builders as usual. I just added a new method `_get_examples_iterable_for_split` to get an ExamplesIterable for a given split. Currently only the GeneratorBasedBuilder and the ArrowBasedBuilder implement it.
The BeamBasedBuilder doesn't implement it yet.
It means that datasets like wikipedia and natural_questions can't be loaded as IterableDataset for now.
## Other details
<s>I may have to make some changes in many dataset scripts to use `download` instead of `download_and_extract` when extraction is not needed. This will avoid errors for streaming.</s>
EDIT: Actually I just check for the extension of the file to do extraction only if needed.
EDIT2: It's not possible to stream from .tar.gz files without downloading the file completely. For now I raise an error if one wants to get a streaming dataset based on .tar.gz files.
## TODO
usual stuff:
- [x] make streaming dependency "aiohttp" optional: `pip install datasets[streaming]`
- [x] tests
- [x] docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2375/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 6,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2375/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2375",
"html_url": "https://github.com/huggingface/datasets/pull/2375",
"diff_url": "https://github.com/huggingface/datasets/pull/2375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2375.patch",
"merged_at": 1624466101000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2374/comments | https://api.github.com/repos/huggingface/datasets/issues/2374/events | https://github.com/huggingface/datasets/pull/2374 | 894,579,364 | MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw | 2,374 | add `desc` to `tqdm` in `Dataset.map()` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....\r\n\r\nhttps://github.com/huggingface/transformers/issues/11797",
"Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side",
"@bhavitvyamalik, as it has been merged would you like to tackle https://github.com/huggingface/transformers/issues/11797?\r\n",
"Definitely @stas00. From what I could gather, you guys want more meaningful `.map` calls for all examples [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch)?",
"That's exactly right, @bhavitvyamalik \r\n\r\nPerhaps the best approach is to do one example, see that other maintainers agree on it. and then replicate to other."
] | 1,621,356,269,000 | 1,622,130,244,000 | 1,622,041,161,000 | CONTRIBUTOR | null | Fixes #2330. Please let me know if anything else is required for this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2374/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2374/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2374",
"html_url": "https://github.com/huggingface/datasets/pull/2374",
"diff_url": "https://github.com/huggingface/datasets/pull/2374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2374.patch",
"merged_at": 1622041161000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2373/comments | https://api.github.com/repos/huggingface/datasets/issues/2373/events | https://github.com/huggingface/datasets/issues/2373 | 894,499,909 | MDU6SXNzdWU4OTQ0OTk5MDk= | 2,373 | Loading dataset from local path | {
"login": "kolakows",
"id": 34172905,
"node_id": "MDQ6VXNlcjM0MTcyOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolakows",
"html_url": "https://github.com/kolakows",
"followers_url": "https://api.github.com/users/kolakows/followers",
"following_url": "https://api.github.com/users/kolakows/following{/other_user}",
"gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolakows/subscriptions",
"organizations_url": "https://api.github.com/users/kolakows/orgs",
"repos_url": "https://api.github.com/users/kolakows/repos",
"events_url": "https://api.github.com/users/kolakows/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolakows/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='/data/dir/corpus.txt', \r\n cache_dir='.')\r\n```"
] | 1,621,351,250,000 | 1,621,352,196,000 | 1,621,352,195,000 | NONE | null | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to use getmtime on the data_files string, without using data_dir. Is this a bug or am I not using the load_dataset correctly?
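For reference, the variant reported as working in the comments passes the full path through `data_files` instead of splitting it between `data_files` and `data_dir` (paths below are the reporter's placeholders):
```python
import datasets

ds = datasets.load_dataset(
    "my_script.py",
    data_files="/data/dir/corpus.txt",  # full path, no separate data_dir
    cache_dir=".",
)
```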
https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2373/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2372/comments | https://api.github.com/repos/huggingface/datasets/issues/2372/events | https://github.com/huggingface/datasets/pull/2372 | 894,496,064 | MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2 | 2,372 | ConvQuestions benchmark added | {
"login": "PhilippChr",
"id": 24608689,
"node_id": "MDQ6VXNlcjI0NjA4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/24608689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippChr",
"html_url": "https://github.com/PhilippChr",
"followers_url": "https://api.github.com/users/PhilippChr/followers",
"following_url": "https://api.github.com/users/PhilippChr/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippChr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippChr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippChr/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippChr/orgs",
"repos_url": "https://api.github.com/users/PhilippChr/repos",
"events_url": "https://api.github.com/users/PhilippChr/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippChr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for your helpful comments and suggestions! :)\r\nI integrated the additional fields, and extended some of the README/dataset card.\r\nAnd I actually realized that we had the cc-by-4.0 for the dataset, so this was also changed.",
"I added the answers to the test set actually :)",
"Oh great ! Let me revert my change then"
] | 1,621,351,010,000 | 1,622,025,105,000 | 1,622,025,105,000 | CONTRIBUTOR | null | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2372/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2372/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2372",
"html_url": "https://github.com/huggingface/datasets/pull/2372",
"diff_url": "https://github.com/huggingface/datasets/pull/2372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2372.patch",
"merged_at": 1622025105000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2371/comments | https://api.github.com/repos/huggingface/datasets/issues/2371/events | https://github.com/huggingface/datasets/issues/2371 | 894,193,403 | MDU6SXNzdWU4OTQxOTM0MDM= | 2,371 | Align question answering tasks with sub-domains | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,621,331,279,000 | 1,621,331,362,000 | null | MEMBER | null | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean diferrent schema of input/output for both (abstractive will have text for both while extractive can use spans indication as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail).
> Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema make sense. Typically NaturalQuestions, TriviaQA and can be good second datasets to compare to and be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178
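For illustration, a rough sketch of what the two schemas could look like as `datasets` features (the field layout below is an assumption based on SQuAD-style data, not an existing task template):
```python
from datasets import Features, Sequence, Value
# Extractive QA: answers are spans of the context, so character offsets are needed.
extractive_qa_features = Features(
    {
        "question": Value("string"),
        "context": Value("string"),
        "answers": Sequence({"text": Value("string"), "answer_start": Value("int32")}),
    }
)
# Abstractive QA: the answer is free-form text, no span offsets.
abstractive_qa_features = Features(
    {
        "question": Value("string"),
        "context": Value("string"),
        "answer": Value("string"),
    }
)
```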
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2371/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2370/comments | https://api.github.com/repos/huggingface/datasets/issues/2370/events | https://github.com/huggingface/datasets/pull/2370 | 893,606,432 | MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy | 2,370 | Adding HendrycksTest dataset | {
"login": "andyzoujm",
"id": 43451571,
"node_id": "MDQ6VXNlcjQzNDUxNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyzoujm",
"html_url": "https://github.com/andyzoujm",
"followers_url": "https://api.github.com/users/andyzoujm/followers",
"following_url": "https://api.github.com/users/andyzoujm/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzoujm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyzoujm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzoujm/subscriptions",
"organizations_url": "https://api.github.com/users/andyzoujm/orgs",
"repos_url": "https://api.github.com/users/andyzoujm/repos",
"events_url": "https://api.github.com/users/andyzoujm/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyzoujm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause to).",
"I took a look at the dummy data and some csv lines were cropped. I fixed them :)"
] | 1,621,277,585,000 | 1,622,479,033,000 | 1,622,479,033,000 | CONTRIBUTOR | null | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. Hope you can kindly help!
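In case it helps with debugging, a minimal sketch of reading such 6-field rows with the standard `csv` module, which handles commas inside quoted fields (the file name is made up, and this is only a guess at where the cropping could come from):
```python
import csv
# Each row: question, choice A, choice B, choice C, choice D, answer (6 fields).
with open("abstract_algebra_dev.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        assert len(row) == 6, f"unexpected row length: {row}"
        question, a, b, c, d, answer = row
```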
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2370/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2370",
"html_url": "https://github.com/huggingface/datasets/pull/2370",
"diff_url": "https://github.com/huggingface/datasets/pull/2370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2370.patch",
"merged_at": 1622479033000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2369/comments | https://api.github.com/repos/huggingface/datasets/issues/2369/events | https://github.com/huggingface/datasets/pull/2369 | 893,554,153 | MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1 | 2,369 | correct labels of conll2003 | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,273,074,000 | 1,621,326,462,000 | 1,621,326,462,000 | MEMBER | null | # What does this PR do?
It fixes/extends the `ner_tags` for conll2003 to include all of the tags.
Paper reference https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
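For reference, a sketch of the full IOB2 tag set this refers to (the standard CoNLL-2003 NER labels; the exact ordering should be double-checked against the linked config):
```python
ner_tags = [
    "O",
    "B-PER", "I-PER",
    "B-ORG", "I-ORG",
    "B-LOC", "I-LOC",
    "B-MISC", "I-MISC",
]
```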
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2369/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2369",
"html_url": "https://github.com/huggingface/datasets/pull/2369",
"diff_url": "https://github.com/huggingface/datasets/pull/2369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2369.patch",
"merged_at": 1621326462000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2368/comments | https://api.github.com/repos/huggingface/datasets/issues/2368/events | https://github.com/huggingface/datasets/pull/2368 | 893,411,076 | MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0 | 2,368 | Allow "other-X" in licenses | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,262,874,000 | 1,621,269,387,000 | 1,621,269,387,000 | CONTRIBUTOR | null | This PR allows "other-X" licenses during metadata validation.
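Roughly, the change amounts to a check like the following (a hypothetical sketch of the rule, not the actual validator code):
```python
def is_valid_license_tag(tag, known_licenses):
    # Accept any registered license tag, plus free-form "other-<description>" tags.
    return tag in known_licenses or tag.startswith("other-")
```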
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2368/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2368",
"html_url": "https://github.com/huggingface/datasets/pull/2368",
"diff_url": "https://github.com/huggingface/datasets/pull/2368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2368.patch",
"merged_at": 1621269387000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2367/comments | https://api.github.com/repos/huggingface/datasets/issues/2367/events | https://github.com/huggingface/datasets/pull/2367 | 893,317,427 | MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0 | 2,367 | Remove getchildren from hyperpartisan news detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,621,257,037,000 | 1,621,260,433,000 | 1,621,260,433,000 | CONTRIBUTOR | null | `Element.getchildren()` is now deprecated in the ElementTree library (I think in python 3.9, so it still passes the automated tests which are using 3.6. But for those of us on bleeding-edge distros it now fails).
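For context, a minimal sketch of the replacement, iterating the element directly instead of calling the removed method:
```python
import xml.etree.ElementTree as ET
root = ET.fromstring("<articles><article id='1'/><article id='2'/></articles>")
# children = root.getchildren()  # deprecated, removed in Python 3.9
children = list(root)  # equivalent and works on all supported versions
for child in children:
    print(child.tag, child.attrib)
```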
https://bugs.python.org/issue29209 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2367/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2367",
"html_url": "https://github.com/huggingface/datasets/pull/2367",
"diff_url": "https://github.com/huggingface/datasets/pull/2367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2367.patch",
"merged_at": 1621260432000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2366/comments | https://api.github.com/repos/huggingface/datasets/issues/2366/events | https://github.com/huggingface/datasets/issues/2366 | 893,185,266 | MDU6SXNzdWU4OTMxODUyNjY= | 2,366 | Json loader fails if user-specified features don't match the json data fields order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,621,247,168,000 | 1,623,840,469,000 | 1,623,840,469,000 | MEMBER | null | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then depending on the order of the features in the json data field it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast.
One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2366/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2365/comments | https://api.github.com/repos/huggingface/datasets/issues/2365/events | https://github.com/huggingface/datasets/issues/2365 | 893,179,697 | MDU6SXNzdWU4OTMxNzk2OTc= | 2,365 | Missing ClassLabel encoding in Json loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,621,246,750,000 | 1,624,892,734,000 | 1,624,892,734,000 | MEMBER | null | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features have ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would fail:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64
```
This is because it just tries to cast the string data to integers, without applying the mapping str->int first
The current workaround is to do instead
```python
dataset = load_dataset("json", data_files=data_files)
dataset = dataset.map(features.encode_example, features=features)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2365/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2364/comments | https://api.github.com/repos/huggingface/datasets/issues/2364/events | https://github.com/huggingface/datasets/pull/2364 | 892,420,500 | MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx | 2,364 | README updated for SNLI, MNLI | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?",
"@lhoestq I agree, I'll look into it."
] | 1,621,078,679,000 | 1,621,260,867,000 | 1,621,258,459,000 | CONTRIBUTOR | null | Closes #2275. It mentions the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq the `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2364/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2364",
"html_url": "https://github.com/huggingface/datasets/pull/2364",
"diff_url": "https://github.com/huggingface/datasets/pull/2364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2364.patch",
"merged_at": 1621258458000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2362/comments | https://api.github.com/repos/huggingface/datasets/issues/2362/events | https://github.com/huggingface/datasets/pull/2362 | 892,100,749 | MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw | 2,362 | Fix web_nlg metadata | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n![image](https://user-images.githubusercontent.com/42851186/118475444-8d1e5d00-b70c-11eb-98e9-844d4daf6139.png)\r\n\r\nTherefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.\r\n\r\nMoreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset(\"indic_glue\", \"sna.bn\")`.\r\n\r\nIs this something that can be fixed on the moonlanding side instead ?",
"> Is this something that can be fixed on the moonlanding side instead ?\r\n\r\nNot really unless we change database:)\r\n\r\nWe'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata",
"Ok, should we close this PR then ?"
] | 1,621,012,507,000 | 1,621,259,057,000 | 1,621,258,948,000 | MEMBER | null | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2362/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2362",
"html_url": "https://github.com/huggingface/datasets/pull/2362",
"diff_url": "https://github.com/huggingface/datasets/pull/2362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2362.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2361/comments | https://api.github.com/repos/huggingface/datasets/issues/2361/events | https://github.com/huggingface/datasets/pull/2361 | 891,982,808 | MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy array passed during input only",
"Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch` https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039 and `test_map_tf`https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056 \r\nthey're expecting `float64`. Shouldn't that be `float32` now?",
"It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.\r\n\r\nI think that we should always keep the precision of the original tensor (torch/tf/numpy).\r\nIt means that as it is in this PR it's fine (the precision is conserved when doing the torch/tf -> numpy conversion).\r\n\r\nThis is a breaking change but in my opinion the fact that we had Value(\"float64\") for torch.float32 tensors was an issue already.\r\n\r\nLet me know what you think. Cc @albertvillanova if you have an opinion on this\r\n\r\nIf we agree on doing this breaking change, we can just change the test. ",
"Hi @lhoestq, \r\nMerged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.\r\n\r\n> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`\r\n> \r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1039\r\n> \r\n> and `test_map_tf`\r\n> https://github.com/huggingface/datasets/blob/3d46bc384f811435e59e3916faa3aa20a1cf87bc/tests/test_arrow_dataset.py#L1056\r\n> \r\n> \r\n> they're expecting `float64`. Shouldn't that be `float32` now?\r\n\r\n",
"> they're expecting float64. Shouldn't that be float32 now?\r\n\r\nYes feel free to update those tests :)\r\n\r\nIt would be nice to have the same test for JAX as well",
"Added same test for for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well"
] | 1,621,003,523,000 | 1,629,189,004,000 | 1,629,189,004,000 | CONTRIBUTOR | null | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2361/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2361",
"html_url": "https://github.com/huggingface/datasets/pull/2361",
"diff_url": "https://github.com/huggingface/datasets/pull/2361.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2361.patch",
"merged_at": 1629189004000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2360/comments | https://api.github.com/repos/huggingface/datasets/issues/2360/events | https://github.com/huggingface/datasets/issues/2360 | 891,965,964 | MDU6SXNzdWU4OTE5NjU5NjQ= | 2,360 | Automatically detect datasets with compatible task schemas | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,621,002,220,000 | 1,621,002,220,000 | null | MEMBER | null | See description of #2255 for details.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2360/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2359/comments | https://api.github.com/repos/huggingface/datasets/issues/2359/events | https://github.com/huggingface/datasets/issues/2359 | 891,946,017 | MDU6SXNzdWU4OTE5NDYwMTc= | 2,359 | Allow model labels to be passed during task preparation | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,621,000,708,000 | 1,621,000,708,000 | null | MEMBER | null | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example, for sentiment classification on Amazon reviews you could have these labels:
- "1 star", "2 stars", "3 stars", "4 stars", "5 stars"
- "1", "2", "3", "4", "5"
Some models may use the first set, while other models use the second set.
Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets?
Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that.
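A rough sketch of how that could look from the user side (note that the `labels=` argument is only a proposal here, not an existing parameter of `prepare_for_task`, and the dataset name is just for illustration):
```python
from datasets import load_dataset
# Labels as the model expects them (e.g. taken from model.config.id2label).
model_labels = ["1 star", "2 stars", "3 stars", "4 stars", "5 stars"]
dataset = load_dataset("amazon_reviews_multi", "en", split="train")
# Proposed: re-align the dataset's ClassLabel names and their order with the model's labels.
dataset = dataset.prepare_for_task("text-classification", labels=model_labels)
```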
The label set could also be the same but not in the same order. For NLI for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the labels order of the model.
Let me know what you think ! This can be done in a future PR
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2359/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2358/comments | https://api.github.com/repos/huggingface/datasets/issues/2358/events | https://github.com/huggingface/datasets/pull/2358 | 891,269,577 | MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2 | 2,358 | Roman Urdu Stopwords List | {
"login": "devzohaib",
"id": 58664161,
"node_id": "MDQ6VXNlcjU4NjY0MTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/58664161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devzohaib",
"html_url": "https://github.com/devzohaib",
"followers_url": "https://api.github.com/users/devzohaib/followers",
"following_url": "https://api.github.com/users/devzohaib/following{/other_user}",
"gists_url": "https://api.github.com/users/devzohaib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devzohaib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devzohaib/subscriptions",
"organizations_url": "https://api.github.com/users/devzohaib/orgs",
"repos_url": "https://api.github.com/users/devzohaib/repos",
"events_url": "https://api.github.com/users/devzohaib/events{/privacy}",
"received_events_url": "https://api.github.com/users/devzohaib/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for sharing :)\r\nI think the best place to share this is probably the `Languages at Hugging Face` section of the forum:\r\nhttps://discuss.huggingface.co/c/languages-at-hugging-face/15\r\n\r\nSince this is not a dataset, I'm closing this PR if you don't mind",
"Thank you I will look into the link that you have shared with me.\n\n\n\n\nOn Mon, May 17, 2021 at 7:05 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> Closed #2358 <https://github.com/huggingface/datasets/pull/2358>.\n>\n> โ\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2358#event-4754836267>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AN7SJYJVY4C5XQRDNET743DTOEPC7ANCNFSM443AZ3MA>\n> .\n>\n"
] | 1,620,930,567,000 | 1,621,414,243,000 | 1,621,260,310,000 | NONE | null | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu, to help efforts to analyze text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2358/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2358",
"html_url": "https://github.com/huggingface/datasets/pull/2358",
"diff_url": "https://github.com/huggingface/datasets/pull/2358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2358.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2357/comments | https://api.github.com/repos/huggingface/datasets/issues/2357/events | https://github.com/huggingface/datasets/pull/2357 | 890,595,693 | MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz | 2,357 | Adding Microsoft CodeXGlue Datasets | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.\r\n\r\nIf it needs to be reran, I can try it do it on my own machine, but I've had a memory issues with a previous dataset due to my compute constraints so I'd prefer to hopefully avoid it all together if not necessary to regenerate.",
"Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n\r\n`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?",
"> Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n> \r\n> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?\r\n\r\nIf it's already in this format then it's fine thanks ! It's all good then\r\n\r\nTo fix the CI you just need to add the `encoding=` parameters to the `open()` calls",
"@lhoestq I think everything should be good to go besides the code styling, which seem to be due to missing or unsupported metadata tags for the READMEs, is this something I should worry about since all the other datasets seem to be failing as well?",
"Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections/subsections.\r\n\r\nAlso, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.",
"> Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n\r\nYes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n\r\ncc @yjernite what do you think about extending our languages taxonomy to programming languages ?",
"Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non, WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as \"Supported Tasks and Leaderboards\" -> \"Supported Tasks\" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as \"Source Data\" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. For some of the sections like \"Source Data\", is this info required?",
"Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that\r\n\r\nI also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)\r\n",
"Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.\r\nIf you are interested you can fill those sections as well, or leave them empty for now.\r\nThis will also fix the error regarding \"Source Data\"\r\n\r\nYou can see the template of the readme here:\r\nhttps://github.com/huggingface/datasets/blob/9d8bf36fdb861d9b2922d7c782fb58f9f542997c/templates/README.md",
"> > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n> \r\n> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n> \r\n> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?\r\n\r\nSounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ? \r\n\r\nI don't think we currently have `_` in language codes/names, but also don't see what it would break *a priori*",
"We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ?",
"Hi guys, I just started working on https://github.com/huggingface/datasets/pull/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https://github.com/madlag/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.\r\n\r\n",
"I am renaming the main classes to match the dataset names, for example : CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode . And I am regenerating the dataset_infos.json accordingly.",
"Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)\r\n\r\nThis PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR",
"Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:.",
"Thanks @ncoop57 for you contribution! It will be really cool to see those datasets used as soon as they are released !"
] | 1,620,866,581,000 | 1,623,144,597,000 | 1,623,144,597,000 | CONTRIBUTOR | null | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've addressed all of the changes still left to do in the old PR, except for the change to the languages. I believe the READMEs should list the different programming languages used rather than just the generic "code" tag: when searching for datasets, SE researchers may be looking for a specific programming language, so being able to filter quickly will be very valuable. Let me know what you think of that, or if you still believe it should be the "code" tag @lhoestq. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2357/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2357",
"html_url": "https://github.com/huggingface/datasets/pull/2357",
"diff_url": "https://github.com/huggingface/datasets/pull/2357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2357.patch",
"merged_at": 1623144597000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2355/comments | https://api.github.com/repos/huggingface/datasets/issues/2355/events | https://github.com/huggingface/datasets/pull/2355 | 890,484,408 | MDExOlB1bGxSZXF1ZXN0NjQzNDk5NTIz | 2,355 | normalized TOCs and titles in data cards | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh right! I'd be in favor of still having the same TOC across the board, we can either leave it as is or add a `[More Info Needed]` `Contributions` Section wherever it's currently missing, wdyt?",
"(I thought those were programmatically updated based on git history :D )",
"Merging for now to avoid conflict since there are so many changes but let's figure out the contributions section next ;) "
] | 1,620,853,199,000 | 1,620,998,592,000 | 1,620,998,592,000 | MEMBER | null | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Contents
This PR normalizes all of them to the newer version | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2355/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2355/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2355",
"html_url": "https://github.com/huggingface/datasets/pull/2355",
"diff_url": "https://github.com/huggingface/datasets/pull/2355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2355.patch",
"merged_at": 1620998592000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2354/comments | https://api.github.com/repos/huggingface/datasets/issues/2354/events | https://github.com/huggingface/datasets/issues/2354 | 890,439,523 | MDU6SXNzdWU4OTA0Mzk1MjM= | 2,354 | Document DatasetInfo attributes | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,620,849,689,000 | 1,621,675,574,000 | 1,621,675,574,000 | MEMBER | null | **Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2354/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2353/comments | https://api.github.com/repos/huggingface/datasets/issues/2353/events | https://github.com/huggingface/datasets/pull/2353 | 890,296,262 | MDExOlB1bGxSZXF1ZXN0NjQzMzM4MDcz | 2,353 | Update README vallidation rules | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,838,646,000 | 1,620,982,566,000 | 1,620,982,566,000 | CONTRIBUTOR | null | This PR allows unexpected subsections under third-level headings. All except `Contributions`.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2353/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2353",
"html_url": "https://github.com/huggingface/datasets/pull/2353",
"diff_url": "https://github.com/huggingface/datasets/pull/2353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2353.patch",
"merged_at": 1620982566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2352/comments | https://api.github.com/repos/huggingface/datasets/issues/2352/events | https://github.com/huggingface/datasets/pull/2352 | 889,810,100 | MDExOlB1bGxSZXF1ZXN0NjQyOTI4NTgz | 2,352 | Set to_json default to JSON lines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is perfect, @albertvillanova - thank you! Tested it to work.\r\n\r\nMight it be a good idea to document the args to `to_json`?\r\n\r\nand also even a very basic progress bar? took 10min for 8M large records for `openwebtext` so perhaps some indication of it's being alive every min or so?",
"@lhoestq I added tests for both `lines` and `orient`."
] | 1,620,807,565,000 | 1,621,587,674,000 | 1,621,587,673,000 | MEMBER | null | With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2352/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2352/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2352",
"html_url": "https://github.com/huggingface/datasets/pull/2352",
"diff_url": "https://github.com/huggingface/datasets/pull/2352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2352.patch",
"merged_at": 1621587673000
} | true |
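Below is a minimal sketch of the behaviour this change introduces, assuming a `datasets` release that includes it; the file names are arbitrary placeholders:

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [0, 1], "text": ["foo", "bar"]})

# New default: JSON Lines, i.e. one JSON object per line
ds.to_json("dump.jsonl")

# A single JSON document is still available by passing the pandas-style kwargs explicitly
ds.to_json("dump.json", lines=False, orient="records")
```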
https://api.github.com/repos/huggingface/datasets/issues/2351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2351/comments | https://api.github.com/repos/huggingface/datasets/issues/2351/events | https://github.com/huggingface/datasets/pull/2351 | 889,584,953 | MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz | 2,351 | simpllify faiss index save | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,791,650,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | Fixes #2350
In some cases, Faiss GPU index objects do not have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU.
In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transforms cause it.
I propose, instead of using the index object to get the device, to infer it from the `FaissIndex.device` field as it is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement and it seems reasonable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2351/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2351",
"html_url": "https://github.com/huggingface/datasets/pull/2351",
"diff_url": "https://github.com/huggingface/datasets/pull/2351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2351.patch",
"merged_at": 1621258901000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2350/comments | https://api.github.com/repos/huggingface/datasets/issues/2350/events | https://github.com/huggingface/datasets/issues/2350 | 889,580,247 | MDU6SXNzdWU4ODk1ODAyNDc= | 2,350 | `FaissIndex.save` throws error on GPU | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"
] | 1,620,790,916,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I will be proposing a fix in a couple of minutes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2350/timeline | null | completed | null | null | false |
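For `datasets` versions without the fix from #2351, the CPU round-trip workaround quoted in the thread looks roughly like this; it assumes `faiss-gpu` and an available GPU, and the column name, index factory and file name are illustrative only:

```python
import faiss  # requires faiss-gpu for device=0
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"text_emb": np.random.randn(256, 32).astype(np.float32).tolist()})
ds.add_faiss_index(column="text_emb", string_factory="Flat", device=0)  # index lives on GPU 0

# Move the index back to CPU before serializing, since faiss cannot write GPU indexes
gpu_index = ds.get_index("text_emb").faiss_index
ds.get_index("text_emb").faiss_index = faiss.index_gpu_to_cpu(gpu_index)
ds.save_faiss_index("text_emb", "text_emb.faiss")
```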
https://api.github.com/repos/huggingface/datasets/issues/2349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2349/comments | https://api.github.com/repos/huggingface/datasets/issues/2349/events | https://github.com/huggingface/datasets/pull/2349 | 888,586,018 | MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3 | 2,349 | Update task_ids for Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,765,873,000 | 1,621,248,794,000 | 1,621,248,514,000 | CONTRIBUTOR | null | This "other-other-knowledge-base" task is better suited for the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2349/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"merged_at": 1621248514000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2348/comments | https://api.github.com/repos/huggingface/datasets/issues/2348/events | https://github.com/huggingface/datasets/pull/2348 | 887,927,737 | MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4 | 2,348 | Add tests for dataset cards | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq\r\n\r\nShould I remove the scripts? or atleast remove running them from the CircleCI config?\r\n\r\nAlso, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata.",
"Also feel free to remove the scripts from the CI and also remove the scripts files :)"
] | 1,620,753,267,000 | 1,621,599,047,000 | 1,621,599,047,000 | CONTRIBUTOR | null | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2348/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2348",
"html_url": "https://github.com/huggingface/datasets/pull/2348",
"diff_url": "https://github.com/huggingface/datasets/pull/2348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2348.patch",
"merged_at": 1621599047000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2347/comments | https://api.github.com/repos/huggingface/datasets/issues/2347/events | https://github.com/huggingface/datasets/issues/2347 | 887,404,868 | MDU6SXNzdWU4ODc0MDQ4Njg= | 2,347 | Add an API to access the language and pretty name of a dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).",
"That works for me!",
"maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?",
"What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.",
"hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)"
] | 1,620,742,208,000 | 1,621,589,206,000 | null | MEMBER | null | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2347/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2346/comments | https://api.github.com/repos/huggingface/datasets/issues/2346/events | https://github.com/huggingface/datasets/pull/2346 | 886,632,114 | MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3 | 2,346 | Add Qasper Dataset | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this\r\nSome TOC titles may be different but I filled it to the best of my knowledge & readme quality check passes now.\r\nready for review @lhoestq "
] | 1,620,725,144,000 | 1,621,340,908,000 | 1,621,340,908,000 | CONTRIBUTOR | null | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2346/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2346",
"html_url": "https://github.com/huggingface/datasets/pull/2346",
"diff_url": "https://github.com/huggingface/datasets/pull/2346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2346.patch",
"merged_at": 1621340907000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2345/comments | https://api.github.com/repos/huggingface/datasets/issues/2345/events | https://github.com/huggingface/datasets/issues/2345 | 886,586,872 | MDU6SXNzdWU4ODY1ODY4NzI= | 2,345 | [Question] How to move and reuse preprocessed dataset? | {
"login": "AtmaHou",
"id": 15045402,
"node_id": "MDQ6VXNlcjE1MDQ1NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtmaHou",
"html_url": "https://github.com/AtmaHou",
"followers_url": "https://api.github.com/users/AtmaHou/followers",
"following_url": "https://api.github.com/users/AtmaHou/following{/other_user}",
"gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions",
"organizations_url": "https://api.github.com/users/AtmaHou/orgs",
"repos_url": "https://api.github.com/users/AtmaHou/repos",
"events_url": "https://api.github.com/users/AtmaHou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AtmaHou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq @LysandreJik",
"<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n",
"Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same",
"> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~"
] | 1,620,724,157,000 | 1,623,386,351,000 | 1,623,386,351,000 | NONE | null | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset without loading cache.
I also tried to torch.save(lm_datasets, fw), but the saved file is only 14M.
What is the proper way to do this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2345/timeline | null | completed | null | null | false |
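Cache hits in `map` require identical parameters, so an alternative that is robust to moving between machines is to persist the processed dataset explicitly; in this sketch the corpus path and the `map` call are placeholders for the real preprocessing:

```python
from datasets import load_dataset, load_from_disk

raw = load_dataset("text", data_files={"train": "my_corpus.txt"}, split="train")
processed = raw.map(lambda example: example)  # stand-in for tokenization / grouping

# Write the processed Arrow files to a directory that can be copied anywhere
processed.save_to_disk("processed_corpus")

# Later, possibly on another machine: no re-preprocessing happens
processed = load_from_disk("processed_corpus")
```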
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/alexvaca0/followers",
"following_url": "https://api.github.com/users/alexvaca0/following{/other_user}",
"gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions",
"organizations_url": "https://api.github.com/users/alexvaca0/orgs",
"repos_url": "https://api.github.com/users/alexvaca0/repos",
"events_url": "https://api.github.com/users/alexvaca0/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexvaca0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n"
] | 1,620,688,570,000 | 1,620,721,488,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas dataframes.
**Additional context**
If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | null | null | null | false |
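A small sketch of the `concatenate_datasets` suggestion from the comment above; two toy in-memory datasets stand in for the Hub dataset and the local one, and both must share identical features:

```python
from datasets import Dataset, concatenate_datasets

base = Dataset.from_dict({"text": ["a sentence from the hub dataset"], "source": ["hub"]})
extra = Dataset.from_dict({"text": ["a sentence from my own files"], "source": ["local"]})

# Row-wise concatenation; if the schemas differ, cast one dataset first with Dataset.cast
merged = concatenate_datasets([base, extra])
print(merged.num_rows)  # 2
```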
https://api.github.com/repos/huggingface/datasets/issues/2343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2343/comments | https://api.github.com/repos/huggingface/datasets/issues/2343/events | https://github.com/huggingface/datasets/issues/2343 | 883,208,539 | MDU6SXNzdWU4ODMyMDg1Mzk= | 2,343 | Columns are removed before or after map function applied? | {
"login": "taghizad3h",
"id": 8199406,
"node_id": "MDQ6VXNlcjgxOTk0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taghizad3h",
"html_url": "https://github.com/taghizad3h",
"followers_url": "https://api.github.com/users/taghizad3h/followers",
"following_url": "https://api.github.com/users/taghizad3h/following{/other_user}",
"gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions",
"organizations_url": "https://api.github.com/users/taghizad3h/orgs",
"repos_url": "https://api.github.com/users/taghizad3h/repos",
"events_url": "https://api.github.com/users/taghizad3h/events{/privacy}",
"received_events_url": "https://api.github.com/users/taghizad3h/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Columns will be removed **after** applying the function and **before** updating the examples with the function's output (as per the docs [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map.remove_columns)). I agree the docs on this should be more clear."
] | 1,620,614,180,000 | 1,654,103,539,000 | null | NONE | null | ## Describe the bug
According to the documentation, when applying the map function the [remove_columns ](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying the function. I think the source code doc is more accurate, right?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2343/timeline | null | null | null | null | false |
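The behaviour described in the comment above can be checked with a small sketch: the mapped function still receives the column, which is only dropped when the output is merged back (the column and function names here are made up):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb"], "meta": [0, 1]})

def add_length(example):
    # "meta" is still visible inside the function
    return {"length": len(example["text"]), "meta_copy": example["meta"]}

ds = ds.map(add_length, remove_columns=["meta"])
print(ds.column_names)  # "meta" is gone; "length" and "meta_copy" were added
```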
https://api.github.com/repos/huggingface/datasets/issues/2342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2342/comments | https://api.github.com/repos/huggingface/datasets/issues/2342/events | https://github.com/huggingface/datasets/pull/2342 | 882,981,420 | MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3 | 2,342 | Docs - CER above 1 | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,603,660,000 | 1,620,653,640,000 | 1,620,653,640,000 | CONTRIBUTOR | null | CER can actually be greater than 1. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2342",
"html_url": "https://github.com/huggingface/datasets/pull/2342",
"diff_url": "https://github.com/huggingface/datasets/pull/2342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2342.patch",
"merged_at": 1620653640000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2341/comments | https://api.github.com/repos/huggingface/datasets/issues/2341/events | https://github.com/huggingface/datasets/pull/2341 | 882,370,933 | MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2 | 2,341 | Added the Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for approving it!"
] | 1,620,569,859,000 | 1,620,724,619,000 | 1,620,724,619,000 | CONTRIBUTOR | null | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2341/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2341",
"html_url": "https://github.com/huggingface/datasets/pull/2341",
"diff_url": "https://github.com/huggingface/datasets/pull/2341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2341.patch",
"merged_at": 1620724618000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2340/comments | https://api.github.com/repos/huggingface/datasets/issues/2340/events | https://github.com/huggingface/datasets/pull/2340 | 882,370,824 | MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx | 2,340 | More consistent copy logic | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,569,853,000 | 1,620,723,513,000 | 1,620,723,513,000 | CONTRIBUTOR | null | Use `info.copy()` instead of `copy.deepcopy(info)`.
`Features.copy` now creates a deep copy. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2340",
"html_url": "https://github.com/huggingface/datasets/pull/2340",
"diff_url": "https://github.com/huggingface/datasets/pull/2340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2340.patch",
"merged_at": 1620723513000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2338/comments | https://api.github.com/repos/huggingface/datasets/issues/2338/events | https://github.com/huggingface/datasets/pull/2338 | 882,046,077 | MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx | 2,338 | fixed download link for web_science | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,551,540,000 | 1,620,653,753,000 | 1,620,653,753,000 | CONTRIBUTOR | null | Fixes #2337. Should work with:
`dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2338/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2338",
"html_url": "https://github.com/huggingface/datasets/pull/2338",
"diff_url": "https://github.com/huggingface/datasets/pull/2338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2338.patch",
"merged_at": 1620653753000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2337/comments | https://api.github.com/repos/huggingface/datasets/issues/2337/events | https://github.com/huggingface/datasets/issues/2337 | 881,610,567 | MDU6SXNzdWU4ODE2MTA1Njc= | 2,337 | NonMatchingChecksumError for web_of_science dataset | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! "
] | 1,620,525,722,000 | 1,620,653,753,000 | 1,620,653,753,000 | NONE | null | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results in OSError.
>OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt'
```python
dataset = load_dataset('web_of_science', 'WOS5736')
```
There are 3 data instances and they all don't work. 'WOS5736', 'WOS11967', 'WOS46985'
datasets 1.6.2
python 3.7.10
Ubuntu 18.04.5 LTS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2337/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2336/comments | https://api.github.com/repos/huggingface/datasets/issues/2336/events | https://github.com/huggingface/datasets/pull/2336 | 881,298,783 | MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5 | 2,336 | Fix overflow issue in interpolation search | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge). \r\n\r\n@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho.",
"Hi ! Thanks for the fix.\r\nUnfortunately in terms of speed this is not acceptable :/\r\nThe `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.\r\n\r\nWould it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?\r\nOn windows machines the default is int32 unfortunately so we have to force the dtype to be int64\r\n\r\n",
"Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bound only with memory) before multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. But for now, the first option is OK for the sake of simplicity."
] | 1,620,507,096,000 | 1,620,653,347,000 | 1,620,653,172,000 | CONTRIBUTOR | null | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2336/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2336",
"html_url": "https://github.com/huggingface/datasets/pull/2336",
"diff_url": "https://github.com/huggingface/datasets/pull/2336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2336.patch",
"merged_at": 1620653172000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2335/comments | https://api.github.com/repos/huggingface/datasets/issues/2335/events | https://github.com/huggingface/datasets/issues/2335 | 881,291,887 | MDU6SXNzdWU4ODEyOTE4ODc= | 2,335 | Index error in Dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,620,506,697,000 | 1,620,653,172,000 | 1,620,653,172,000 | CONTRIBUTOR | null | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)
2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> d.map(lambda ex: ex)
0%|โ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars
k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
0%|โ | 290162/74004228 [00:13<59:11, 20757.23ex/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map
new_fingerprint=new_fingerprint,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper
out = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single
for i, example in enumerate(pbar):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__
format_kwargs=format_kwargs,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table
pa_subtable = _query_table(table, key)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table
return table.fast_slice(key % table.num_rows, 1)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice
i = _interpolation_search(self._offsets, offset)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search
raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
IndexError: Invalid query '290162' for size 74004228.
```
Tested on Windows, can run on Linux if needed.
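A quick way to reproduce the overflow outside of `datasets` (assumption: on Windows the default NumPy integer dtype is int32, which is also what the EDIT below points to):
```python
import numpy as np

# arr mimics the table offsets; on Windows np.array([...]) of ints defaults to int32
arr = np.array([0, 74004228])
print(arr.dtype)  # int32 on Windows, int64 on most Linux/macOS builds
x = 290162
# analogous to the (j - i) * (x - arr[i]) step in _interpolation_search:
# the product exceeds 2**31 - 1, so it overflows / wraps around when the dtype is int32
print((arr[-1] - arr[0]) * (x - arr[0]))
```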
EDIT:
It seems like for this to happen, the default NumPy dtype has to be np.int32. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2335/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2334/comments | https://api.github.com/repos/huggingface/datasets/issues/2334/events | https://github.com/huggingface/datasets/pull/2334 | 879,810,107 | MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw | 2,334 | Updating the DART file checksums in GEM | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@sebastianGehrmann "
] | 1,620,424,424,000 | 1,620,425,890,000 | 1,620,425,890,000 | MEMBER | null | The DART files were just updated on the source GitHub
https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2334/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2334",
"html_url": "https://github.com/huggingface/datasets/pull/2334",
"diff_url": "https://github.com/huggingface/datasets/pull/2334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2334.patch",
"merged_at": 1620425890000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2333/comments | https://api.github.com/repos/huggingface/datasets/issues/2333/events | https://github.com/huggingface/datasets/pull/2333 | 879,214,067 | MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy | 2,333 | Fix duplicate keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"- @jplu "
] | 1,620,401,288,000 | 1,620,510,451,000 | 1,620,403,028,000 | MEMBER | null | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
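For illustration, a minimal sketch of the recurring pattern (hypothetical `_generate_examples`, not taken from any specific dataset script); the typical fix is simply to initialize the counter once, outside the file loop:
```python
def _generate_examples(self, filepaths):
    for filepath in filepaths:
        id_ = 0  # bug: the counter restarts for every file, so keys repeat across files
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield id_, {"text": line.rstrip("\n")}
                id_ += 1
```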
Most of the time it was because the counter used for IDs was reset at each new data file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2333/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"merged_at": 1620403028000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2332/comments | https://api.github.com/repos/huggingface/datasets/issues/2332/events | https://github.com/huggingface/datasets/pull/2332 | 879,041,608 | MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4 | 2,332 | Add note about indices mapping in save_to_disk docstring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,395,382,000 | 1,620,408,048,000 | 1,620,408,048,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2332/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2332",
"html_url": "https://github.com/huggingface/datasets/pull/2332",
"diff_url": "https://github.com/huggingface/datasets/pull/2332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2332.patch",
"merged_at": 1620408048000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | {
"login": "ktangri",
"id": 22266659,
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktangri",
"html_url": "https://github.com/ktangri",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktangri/subscriptions",
"organizations_url": "https://api.github.com/users/ktangri/orgs",
"repos_url": "https://api.github.com/users/ktangri/repos",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktangri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,620,395,039,000 | 1,620,395,039,000 | null | NONE | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don't have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2330/comments | https://api.github.com/repos/huggingface/datasets/issues/2330/events | https://github.com/huggingface/datasets/issues/2330 | 878,490,927 | MDU6SXNzdWU4Nzg0OTA5Mjc= | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?",
"I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset."
] | 1,620,366,774,000 | 1,622,041,161,000 | 1,622,041,161,000 | CONTRIBUTOR | null | It's normal to have many `map()` calls, and some of them can take a few minutes,
so it would be nice to be able to set a description on the progress bar.
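Something along these lines — the `desc` argument is hypothetical here, it does not exist in `map()` at the time of writing:
```python
from datasets import load_dataset

ds = load_dataset("sst", split="train")
# hypothetical API: forward `desc` to the underlying tqdm progress bar
ds = ds.map(lambda ex: ex, desc="Tokenizing the train split")
```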
Alternative solution:
Print the description before/after the `map()` call. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2330/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2329/comments | https://api.github.com/repos/huggingface/datasets/issues/2329/events | https://github.com/huggingface/datasets/pull/2329 | 877,924,198 | MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0 | 2,329 | Add cache dir for in-memory datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n",
"Good job! Looking forward to this new feature! ๐ฅ",
"@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ",
"@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` ๐) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.",
"Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they donโt have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !",
"Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.",
"Superseded by #2460 (to close issue #2458)."
] | 1,620,329,732,000 | 1,623,181,608,000 | 1,623,179,206,000 | CONTRIBUTOR | null | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2329/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2328/comments | https://api.github.com/repos/huggingface/datasets/issues/2328/events | https://github.com/huggingface/datasets/pull/2328 | 877,673,896 | MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2 | 2,328 | Add Matthews/Pearson/Spearman correlation metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,317,367,000 | 1,620,320,290,000 | 1,620,320,290,000 | MEMBER | null | Added three metrics:
- The Matthews correlation coefficient (from sklearn)
- The Pearson correlation coefficient (from scipy)
- The Spearman correlation coefficient (from scipy)
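A quick usage sketch (assuming the metrics are exposed under the usual `load_metric` names, e.g. `"matthews_correlation"`, `"pearsonr"`, `"spearmanr"`):
```python
from datasets import load_metric

matthews = load_metric("matthews_correlation")
result = matthews.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 1])
print(result)  # e.g. {'matthews_correlation': 0.0}
```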
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2328",
"html_url": "https://github.com/huggingface/datasets/pull/2328",
"diff_url": "https://github.com/huggingface/datasets/pull/2328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2328.patch",
"merged_at": 1620320290000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2327/comments | https://api.github.com/repos/huggingface/datasets/issues/2327/events | https://github.com/huggingface/datasets/issues/2327 | 877,565,831 | MDU6SXNzdWU4Nzc1NjU4MzE= | 2,327 | A syntax error in example | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"cc @beurkinger but I think this has been fixed internally and will soon be updated right ?",
"This issue has been fixed."
] | 1,620,311,684,000 | 1,621,479,859,000 | 1,621,479,859,000 | NONE | null | ![image](https://user-images.githubusercontent.com/6883957/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png)
Sorry to report with an image, I can't find the template source code of this snippet. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2327/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2326/comments | https://api.github.com/repos/huggingface/datasets/issues/2326/events | https://github.com/huggingface/datasets/pull/2326 | 876,829,254 | MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,248,318,000 | 1,620,376,870,000 | 1,620,376,870,000 | MEMBER | null | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2326/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"merged_at": 1620376870000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2325/comments | https://api.github.com/repos/huggingface/datasets/issues/2325/events | https://github.com/huggingface/datasets/pull/2325 | 876,653,121 | MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx | 2,325 | Added the HLGD dataset | {
"login": "tingofurro",
"id": 2609265,
"node_id": "MDQ6VXNlcjI2MDkyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tingofurro",
"html_url": "https://github.com/tingofurro",
"followers_url": "https://api.github.com/users/tingofurro/followers",
"following_url": "https://api.github.com/users/tingofurro/following{/other_user}",
"gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions",
"organizations_url": "https://api.github.com/users/tingofurro/orgs",
"repos_url": "https://api.github.com/users/tingofurro/repos",
"events_url": "https://api.github.com/users/tingofurro/events{/privacy}",
"received_events_url": "https://api.github.com/users/tingofurro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Is there anything else needed from my end?",
"Thanks Bhavitvya and Quentin, this was very streamlined!"
] | 1,620,233,609,000 | 1,620,831,313,000 | 1,620,828,998,000 | CONTRIBUTOR | null | Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task
Dataset Link: https://github.com/tingofurro/headline_grouping
Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2325",
"html_url": "https://github.com/huggingface/datasets/pull/2325",
"diff_url": "https://github.com/huggingface/datasets/pull/2325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2325.patch",
"merged_at": 1620828998000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2324/comments | https://api.github.com/repos/huggingface/datasets/issues/2324/events | https://github.com/huggingface/datasets/pull/2324 | 876,602,064 | MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz | 2,324 | Create Audio feature | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [
"For optimal storage, it would be better to:\r\n- store only the audio file path in the cache Arrow file\r\n- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function)",
"Thanks a lot @lhoestq for your helpful insights! ๐ค ",
"Just one step before having a first running example to benchmark.\r\n\r\nDecision to make: how to call the function `dataset.features.decode_example`:\r\n- The usual approach until now in speech applications: call it in a subsequent `.map` function\r\n - Pros: multiprocessing can be used out of the box\r\n - Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage\r\n- Approach suggested by @lhoestq (see above: https://github.com/huggingface/datasets/pull/2324#discussion_r660758683): doing it in formatting\r\n - Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset\r\n - Cons: it is not cached; need to implement multiprocessing for this case\r\n- Other pros/cons for the previous options?\r\n- Other options?\r\n\r\ncc: @lhoestq @patrickvonplaten @anton-l ",
"@albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:\n\n- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up about 40x.\n- However, a 60gb English mp3 dataset will blow up to about 600gb raw (iirc), which is why loading on-the-fly (optionally?) could be very beneficial as well.",
"Users can do the conversion from mp3 to wav by themselves if they want to using `map`.\r\n\r\nIMO it's better if we can keep the decoding part with the minimal features to be both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter)\r\n\r\nDecompressing from mp3 to wav sounds like an optimization that depends on the problem that the user wants to solve, the constrains from its environment (disk space, IO speed), and other parameters (optimal training speed for example). Therefore I would leave this to the user to decide whether it has to do it or not.\r\n\r\nLet me know what you think about this",
"@albertvillanova, In my opinion the pros strongly outweigh the cons in the @lhoestq's suggestion which is why I think we should go forward with it. \r\n\r\nThe cons:\r\n- \"the operation won't be cached\" is not to important as the user will most likely access just a couple of audio array to see how it looks like and then for the \"full\" feature extraction she/he will make use of `.map(...)` anyways which means that the result will be cached. \r\n- Regarding the multi-processing - if I understand correctly it'll follow the same logic here -> the user will only access some audio arrays for testing playing around with the model but use `.map(...)` for larger operations where multi-processing would still work as before.\r\n\r\nThe advantages mostly solve the main poinpoints being:\r\n- exploding disk space\r\n- bad user experience since the audio is not loaded on the go\r\n\r\n=> So I'm very much in favor of the \"direct-access\" feature",
"Update: I've retaken this issue.\r\n\r\nIf the decoding logic is implemented when \"examples are accessed\", then if afterwards we use the `.map`, it tries to apply the decoding twice (as maps iterates over the examples, thus \"accessing them\", before trying to apply the map function)...\r\n\r\nI'm thinking on some other approach...",
"I have reimplemented the previous approach, so that we can discuss about it: examples are decoded when accessed.",
"What about creating a new specific formatting, just for decoding? This would be only active within a context manager.",
"Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.\r\n\r\n@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks.",
"Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week.",
"Sure it looks nice this way :) Feel free to continue !",
"As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n\r\nOne proposed solution is to pass `input_columns` to `map`. Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n\r\nI suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n\r\nAbove (https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n\r\nNow, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n```python\r\ndef change_dir(example):\r\n example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n\r\n\r\nwith ds.formatted_as(\"no_decoding\"):\r\n print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n ds.map(change_dir)\r\n print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n\r\nprint(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n```\r\n\r\nPlease, just tell me what you think.\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"> As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n> \r\n> One proposed solution is to pass `input_columns` to `map`. Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n> \r\n> I suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n> \r\n> Above ([#2324 (comment)](https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003)), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n> \r\n> Now, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n> \r\n> ```python\r\n> def change_dir(example):\r\n> example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n> \r\n> \r\n> with ds.formatted_as(\"no_decoding\"):\r\n> print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n> ds.map(change_dir)\r\n> print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n> \r\n> print(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n> ```\r\n> \r\n> Please, just tell me what you think.\r\n> CC: @lhoestq @patrickvonplaten @anton-l\r\n\r\nI'm fine with a context manager! There is no way to **not** decode the audio if its key is not accessed no?\r\n\r\nE.g.\r\n\r\n```python\r\ndef load(batch):\r\n batch[\"speech_array\"] = torchaudio.load(batch[\"file\"])\r\n return batch\r\n\r\nds.map(load)\r\n```\r\n\r\ndoes *e.g.* not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no? \r\n\r\n=> I'm happy with both the context manager and using `input_colmuns`. Both of those solutions are equally good to me if a \"not-access-key-no-decoding\" solution is just not feasible. I let you guys decide :-)",
"> \r\n> There is no way to **not** decode the audio if its key is not accessed no?\r\n> \r\n> E.g...\r\n> \r\n> does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n\r\n@patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.",
"> > There is no way to **not** decode the audio if its key is not accessed no?\r\n> > E.g...\r\n> > does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n> \r\n> @patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.\r\n\r\nThanks a lot for the message! I'm discussing a bit with @anton-l at the moment - will share our results as soon as possible",
"Current implementation: see use cases in file https://github.com/huggingface/datasets/blob/0f80e6eaa6f596ff6287eb33587e2d9c69af0e73/tests/features/test_audio.py\r\n\r\nAutomatic decoding:\r\n- when directly accessing an example or a batch\r\n ```python\r\n dset[0]\r\n dset[:2]\r\n ```\r\n- during map, only if audio field is accessed:\r\n ```python\r\n def process_audio_sampling_rate(example):\r\n example[\"double_sampling_rate\"] = 2 * example[\"audio\"][\"sampling_rate\"]\r\n return example\r\n\r\n decoded_dset = dset.map(process_audio_sampling_rate)\r\n ```\r\n\r\nNo automatic decoding:\r\n- during map if audio field is not accessed:\r\n ```python\r\n def process_text(example):\r\n example[\"text\"] = example[\"text\"] + \" World!\"\r\n return example\r\n\r\n decoded_dset = dset.map(process_text)\r\n ```\r\n\r\nThe types of example and batch are kept as usual, `dict[str, Any]` and `dict[str, list[Any]]` respectively.\r\n\r\nCC: @patrickvonplaten @anton-l @lhoestq ",
"That's awesome! Thanks so much for your work on this @albertvillanova!",
"Oh and maybe have a test to make sure that casting the Audio feature to change the sampling rate works as expected ?",
"@lhoestq the test for the resampling is already in place in `test_audio_resampling`: \r\nhttps://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R48-R56",
"Please note that we should agree in the API: see 53d6d73\r\n\r\nThis is just a proposal implementation:\r\n- Create a new method named `cast_column`, which performs a shallow kind of cast (without using `map()` or caching)\r\n\r\nWe should agree in the name, because as it is, it might be confused with `cast` (and users might think `cast_column` caches the result as `cast`)\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"IMO cast and cast_column should have the exact same behavior, to make the experience simple for the user (no distinction between shallow or deep cast).\r\n\r\nMaybe we should change `cast` to use `cast_column` on every column and make `cast_column` use `map` if and only if it's necessary. For Audio for example `map` is not needed.\r\n\r\nWe just need to do some tests to know which casts always need map and which ones don't. This implies either looking at the PyArrow source code (the documentation doesn't mention all these details) or playing with PyArrow to figure it out.\r\n\r\nI guess for now we can just have the simplest `cast_column` which always uses map unless it's an Audio feature type.\r\n\r\nLet me know what you think !",
"@lhoestq I totally agree: `cast` and `cast_column` should be analog to each other.\r\n\r\nFor the implementation, let me try something simpler than the one suggested by you...",
"@lhoestq what do you think of an approach like this 633ef09?\r\n\r\nIf it's OK, then we should implement passing parameters to `cast`.",
"@lhoestq maybe for now we could make a simple implementation and finish this PR. Then we could make a follow-up PR to deal specifically with the optimal implementation of `cast_column` and `cast`, as this issue is not specific to the Audio feature.",
"> @lhoestq what do you think of an approach like this 633ef09?\r\n\r\nYea that's good enough for the time being :)\r\n\r\nI think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n\r\n(note that cast currently doesn't use the decorator since it's based on `map` that already updates the fingerprint).",
"> \r\n> I think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n> \r\n> (note that cast currently doesn't use the decorator since it's based on `map` that already updates the fingerprint).\r\n\r\n@lhoestq note that `cast_column` may call `cast` in some cases, and the decorator would not be necessary for these cases...\r\n- I did it by setting `inplace=False`, and updating fingerprint explicitly only when `cast` is not called.",
"I think current state of this PR could be included in our next release, as experimental feature, for stress testing it and try to find all potential issues. What do you think?\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"Looks great! Ready to try it out on the transformers examples after the release :)",
"Think we are good to merge here no? :-)"
] | 1,620,230,122,000 | 1,634,120,793,000 | 1,634,120,793,000 | MEMBER | null | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, so there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contain: array and sample_rate.
- The array is reshaped as 1D array (expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution's package manager, for example `sudo apt-get install libsndfile1`.
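For reference, a rough sketch of the underlying decoding step with `soundfile` (hypothetical file path; reshaping to 1D as described above):
```python
import soundfile as sf

array, sampling_rate = sf.read("path/to/file.wav")
array = array.reshape(-1)  # flatten to a 1D array, the shape expected by Wav2Vec2
print(array.shape, sampling_rate)
```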
## Requirements Specification
- Access example with audio loading and resampling:
```python
ds[0]["audio"]
```
- Map with audio loading & resampling:
```python
def preprocess(batch):
batch["input_values"] = processor(batch["audio"]).input_values
return batch
ds = ds.map(preprocess)
```
- Map without audio loading and resampling:
```python
def preprocess(batch):
batch["labels"] = processor(batch["target_text"]).input_values
return batch
ds = ds.map(preprocess)
```
- Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast audio column to change sampling rate:
```python
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2324/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2324",
"html_url": "https://github.com/huggingface/datasets/pull/2324",
"diff_url": "https://github.com/huggingface/datasets/pull/2324.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2324.patch",
"merged_at": 1634120793000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2323/comments | https://api.github.com/repos/huggingface/datasets/issues/2323/events | https://github.com/huggingface/datasets/issues/2323 | 876,438,507 | MDU6SXNzdWU4NzY0Mzg1MDc= | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | {
"login": "ekeleshian",
"id": 33647474,
"node_id": "MDQ6VXNlcjMzNjQ3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekeleshian",
"html_url": "https://github.com/ekeleshian",
"followers_url": "https://api.github.com/users/ekeleshian/followers",
"following_url": "https://api.github.com/users/ekeleshian/following{/other_user}",
"gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions",
"organizations_url": "https://api.github.com/users/ekeleshian/orgs",
"repos_url": "https://api.github.com/users/ekeleshian/repos",
"events_url": "https://api.github.com/users/ekeleshian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekeleshian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Upgrading datasets to version 1.6 fixes the issue",
"This bug was fixed in #1995. Upgrading the `datasets` should work! ",
"Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists."
] | 1,620,220,488,000 | 1,620,383,550,000 | 1,620,383,550,000 | NONE | null | ## Describe the bug
When you index into ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times, namely "Would such an act of refusal be useful?". Similarly, when you index into ['test'] and then ['text'], the list is the single sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times.
I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```
## Expected Result
Rows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">
## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">
## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit}
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2323/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2322/comments | https://api.github.com/repos/huggingface/datasets/issues/2322/events | https://github.com/huggingface/datasets/issues/2322 | 876,383,853 | MDU6SXNzdWU4NzYzODM4NTM= | 2,322 | Calls to map are not cached. | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"repos_url": "https://api.github.com/users/villmow/repos",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] \r\nNo config specified, defaulting to: sst/default\r\nDownloading and preparing dataset sst/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0/5 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/5 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\nexecuted [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]\r\nexecuted [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]\r\nexecuted [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]\r\nexecuted [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]\r\nexecuted [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]\r\nexecuted [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]\r\n#0: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 94.83ba/s]\r\nexecuted [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]\r\n#1: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 92.75ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]\r\n#0: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 118.81ba/s]\r\n#1: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 123.06ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/2 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/2 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\n#0: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 119.42ba/s]\r\nexecuted [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]\r\n#1: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 123.33ba/s]\r\n\r\n\r\n\r\n ############################## \r\n\r\n\r\n\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-6079777aa097c8f8.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-2dc05c46f68eda6e.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-1ca347e7430b98f1.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-c0f1a73ce3ba40cd.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at 
/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-832a1407bf1ac5b7.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-036316a259b773c4.arrow\r\n- Datasets: 1.5.0\r\n- Python: 3.8.3 (default, May 19 2020, 18:47:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10\r\n```",
"Hi,\r\n\r\nset `keep_in_memory` to False when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):\r\n\r\nhttps://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e767ad406f9da7610df2/src/datasets/arrow_dataset.py#L1718\r\n\r\n@albertvillanova It seems like this behavior was overlooked in #2182.\r\n\r\n",
"Hi @villmow, thanks for reporting. \r\n\r\nAs @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed.",
"Hi ! Currently a dataset that is in memory doesn't know doesn't know in which directory it has to read/write cache files.\r\nOn the other hand, a dataset that loaded from the disk (via memory mapping) uses the directory from which the dataset is located to read/write cache files.\r\n\r\nBecause of that, currently in-memory datasets simply don't use caching.\r\n\r\nMaybe a Dataset object could have a `cache_dir` that is set to the directory where the arrow files are created during `load_dataset` ?",
"Fixed once reverted the default in-memory feature:\r\nClosed by #2460 (to close issue #2458).",
"Please @villmow, feel free to update to `Datasets` latest version (1.8)."
] | 1,620,216,687,000 | 1,623,179,402,000 | 1,623,179,301,000 | NONE | null | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])
return samples
# first call
x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
print('\n'*3, "#" * 30, '\n'*3)
# second call
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# print version
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
## Actual results
This code prints the following output for me:
```bash
No config specified, defaulting to: sst/default
Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 59.85ba/s]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 60.85ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 69.32ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 70.93ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
#0: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 63.25ba/s]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 57.69ba/s]
##############################
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 58.10ba/s]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|โโโโโโโโโโ| 5/5 [00:00<00:00, 57.19ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 60.10ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 53.82ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
#0: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 72.76ba/s]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 71.55ba/s]
- Datasets: 1.6.1
- Python: 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
```
## Expected results
Caching should work.
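
---
A minimal sketch of the workaround suggested in the comments quoted above — loading the dataset memory-mapped with `keep_in_memory=False` so that cache files are looked up again on the second `map` call (illustration only, not part of the original report):
```python
import datasets

datasets.set_caching_enabled(True)

# keep_in_memory=False forces the dataset to be memory-mapped from disk,
# which is the case where the map() cache lookup currently works
sst = datasets.load_dataset("sst", keep_in_memory=False)

def foo(samples, i):
    print("executed", i[:10])
    return samples

x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# The second call should now reuse the cached results instead of re-executing foo
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
```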
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2322/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2321/comments | https://api.github.com/repos/huggingface/datasets/issues/2321/events | https://github.com/huggingface/datasets/pull/2321 | 876,304,364 | MDExOlB1bGxSZXF1ZXN0NjMwNDc3NDUy | 2,321 | Set encoding in OSCAR dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,210,423,000 | 1,620,211,855,000 | 1,620,211,855,000 | MEMBER | null | Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms.
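For illustration only (not the actual diff), the kind of change involved is passing an explicit encoding when opening the text files:
```python
from pathlib import Path

# Hypothetical file, just to make the snippet self-contained
path = Path("example.txt")
path.write_text("voorbeeldteks met spesiale karakters: é ö ñ", encoding="utf-8")

# Relying on the platform default (cp1252 on Windows) can raise UnicodeDecodeError.
# Passing the encoding explicitly behaves the same on every platform:
with open(path, encoding="utf-8") as f:
    for line in f:
        print(line.strip())
```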
Fix #2319. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2321/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2321",
"html_url": "https://github.com/huggingface/datasets/pull/2321",
"diff_url": "https://github.com/huggingface/datasets/pull/2321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2321.patch",
"merged_at": 1620211854000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2320/comments | https://api.github.com/repos/huggingface/datasets/issues/2320/events | https://github.com/huggingface/datasets/pull/2320 | 876,257,026 | MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5 | 2,320 | Set default name in init_dynamic_modules | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,207,003,000 | 1,620,287,874,000 | 1,620,287,874,000 | MEMBER | null | Set default value for the name of dynamic modules.
Close #2318. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2320/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2320",
"html_url": "https://github.com/huggingface/datasets/pull/2320",
"diff_url": "https://github.com/huggingface/datasets/pull/2320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2320.patch",
"merged_at": 1620287874000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.",
"Awesome, thank you. ๐ ",
"@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`."
] | 1,620,206,572,000 | 1,620,212,251,000 | 1,620,211,855,000 | NONE | null | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2318/comments | https://api.github.com/repos/huggingface/datasets/issues/2318/events | https://github.com/huggingface/datasets/issues/2318 | 876,212,460 | MDU6SXNzdWU4NzYyMTI0NjA= | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | {
"login": "richardliaw",
"id": 4529381,
"node_id": "MDQ6VXNlcjQ1MjkzODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardliaw",
"html_url": "https://github.com/richardliaw",
"followers_url": "https://api.github.com/users/richardliaw/followers",
"following_url": "https://api.github.com/users/richardliaw/following{/other_user}",
"gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions",
"organizations_url": "https://api.github.com/users/richardliaw/orgs",
"repos_url": "https://api.github.com/users/richardliaw/repos",
"events_url": "https://api.github.com/users/richardliaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardliaw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```",
"Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!",
"I like the idea as well ! thanks @albertvillanova ",
"Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.",
"awesome work @albertvillanova !"
] | 1,620,204,048,000 | 1,620,290,745,000 | 1,620,287,874,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import.
I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future proof.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
`datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case.
By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently):
https://github.com/huggingface/blog/issues/106
https://github.com/huggingface/transformers/issues/11565
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
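For reference, the resolution described in the comments above boils down to the following — a small sketch assuming a `datasets` version that includes the default module name (see #2320):
```python
import datasets

# Returns the directory where dynamic dataset/metric modules are written,
# e.g. somewhere under ~/.cache/huggingface/modules (exact path may vary)
dynamic_modules_path = datasets.load.init_dynamic_modules()
print(dynamic_modules_path)
```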
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2318/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2317/comments | https://api.github.com/repos/huggingface/datasets/issues/2317/events | https://github.com/huggingface/datasets/pull/2317 | 875,767,318 | MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4 | 2,317 | Fix incorrect version specification for the pyarrow package | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,156,620,000 | 1,620,209,356,000 | 1,620,206,518,000 | CONTRIBUTOR | null | This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .
Simply, I put a comma between the version bounds.
Fix #2316. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2317/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2317/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2317",
"html_url": "https://github.com/huggingface/datasets/pull/2317",
"diff_url": "https://github.com/huggingface/datasets/pull/2317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2317.patch",
"merged_at": 1620206518000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2316/comments | https://api.github.com/repos/huggingface/datasets/issues/2316/events | https://github.com/huggingface/datasets/issues/2316 | 875,756,353 | MDU6SXNzdWU4NzU3NTYzNTM= | 2,316 | Incorrect version specification for pyarrow | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by #2317."
] | 1,620,155,711,000 | 1,620,209,403,000 | 1,620,209,403,000 | CONTRIBUTOR | null | ## Describe the bug
The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install "pyarrow>=1.0.0<4.0.0"
```
## Expected results
It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive).
## Actual results
pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0.
This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well:
```bash
conda env export
InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s)
```
## Fix suggestion
Put a comma between the version limits, which means replacing the line in the setup.py file with the following:
```python
"pyarrow>=1.0.0,<4.0.0",
```
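The corrected specifier can be sanity-checked with the `packaging` library (which pip relies on for version handling) — a quick sketch:
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.0.0,<4.0.0")  # note the comma between the two bounds
print(Version("3.0.0") in spec)  # True
print(Version("4.0.0") in spec)  # False
```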
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.2
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2316/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2315/comments | https://api.github.com/repos/huggingface/datasets/issues/2315/events | https://github.com/huggingface/datasets/pull/2315 | 875,742,200 | MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy | 2,315 | Datasets cli improvements | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. "
] | 1,620,154,511,000 | 1,620,664,611,000 | 1,620,664,610,000 | CONTRIBUTOR | null | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
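
For context, the system-info snippet from `bug_report.md` that the first point replaces is essentially the following (as also seen pasted in the bug reports above):
```python
import datasets
import sys
import platform

print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```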
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2315/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2315",
"html_url": "https://github.com/huggingface/datasets/pull/2315",
"diff_url": "https://github.com/huggingface/datasets/pull/2315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2315.patch",
"merged_at": 1620664610000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2314/comments | https://api.github.com/repos/huggingface/datasets/issues/2314/events | https://github.com/huggingface/datasets/pull/2314 | 875,729,271 | MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4 | 2,314 | Minor refactor prepare_module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`.",
"closing in favor of #2986 "
] | 1,620,153,446,000 | 1,634,116,054,000 | 1,634,116,054,000 | MEMBER | null | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2314",
"html_url": "https://github.com/huggingface/datasets/pull/2314",
"diff_url": "https://github.com/huggingface/datasets/pull/2314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2314.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2313/comments | https://api.github.com/repos/huggingface/datasets/issues/2313/events | https://github.com/huggingface/datasets/pull/2313 | 875,475,367 | MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4 | 2,313 | Remove unused head_hf_s3 function | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,135,726,000 | 1,620,379,902,000 | 1,620,379,902,000 | MEMBER | null | Currently, the function `head_hf_s3` is not used:
- neither its returned result is used
- nor does it raise any exception, as exceptions are caught and returned (not raised)
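
A tiny sketch of the pattern described in the second point — errors are caught and returned rather than raised, so a caller that ignores the return value never sees a failure (not the real implementation):
```python
import requests

def head_sketch(url: str):
    try:
        return requests.head(url, timeout=10)
    except requests.exceptions.RequestException as err:
        return err  # returned instead of raised

# The return value is discarded, so a failure here has no visible effect
head_sketch("https://huggingface.co")
```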
This PR removes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2313/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2312/comments | https://api.github.com/repos/huggingface/datasets/issues/2312/events | https://github.com/huggingface/datasets/pull/2312 | 875,435,726 | MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz | 2,312 | Add rename_columnS method | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Merging then ๐ "
] | 1,620,133,073,000 | 1,620,135,793,000 | 1,620,135,792,000 | CONTRIBUTOR | null | Cherry-picked from #2255 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2312",
"html_url": "https://github.com/huggingface/datasets/pull/2312",
"diff_url": "https://github.com/huggingface/datasets/pull/2312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2312.patch",
"merged_at": 1620135792000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2311/comments | https://api.github.com/repos/huggingface/datasets/issues/2311/events | https://github.com/huggingface/datasets/pull/2311 | 875,262,208 | MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq , I am not sure about the error message:\r\n```\r\n#!/bin/bash -eo pipefail\r\n./scripts/datasets_metadata_validator.py\r\nWARNING:root:โ Failed to validate 'datasets/openslr/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:โ Failed on 1 files.\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1 \r\n```\r\nCould you have a look please? Thanks.",
"Hi ! The error is unrelated to your PR and has been fixed on master\r\nNext time feel free to merge master into your branch to fix the CI error ;)"
] | 1,620,119,283,000 | 1,620,381,055,000 | 1,620,381,055,000 | CONTRIBUTOR | null | Add large speech datasets for Sinhala, Bengali and Nepali. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2311/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2311",
"html_url": "https://github.com/huggingface/datasets/pull/2311",
"diff_url": "https://github.com/huggingface/datasets/pull/2311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2311.patch",
"merged_at": 1620381055000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2310/comments | https://api.github.com/repos/huggingface/datasets/issues/2310/events | https://github.com/huggingface/datasets/pull/2310 | 875,096,051 | MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5 | 2,310 | Update README.md | {
"login": "cryoff",
"id": 15029054,
"node_id": "MDQ6VXNlcjE1MDI5MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cryoff",
"html_url": "https://github.com/cryoff",
"followers_url": "https://api.github.com/users/cryoff/followers",
"following_url": "https://api.github.com/users/cryoff/following{/other_user}",
"gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cryoff/subscriptions",
"organizations_url": "https://api.github.com/users/cryoff/orgs",
"repos_url": "https://api.github.com/users/cryoff/repos",
"events_url": "https://api.github.com/users/cryoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/cryoff/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}\r\n```"
] | 1,620,103,081,000 | 1,620,110,159,000 | null | CONTRIBUTOR | null | Provides description of data instances and dataset features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2310/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2310",
"html_url": "https://github.com/huggingface/datasets/pull/2310",
"diff_url": "https://github.com/huggingface/datasets/pull/2310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2310.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2309/comments | https://api.github.com/repos/huggingface/datasets/issues/2309/events | https://github.com/huggingface/datasets/pull/2309 | 874,644,990 | MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx | 2,309 | Fix conda release | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,620,053,579,000 | 1,620,057,677,000 | 1,620,057,677,000 | MEMBER | null | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't supported
- sync the version requirement of `huggingface_hub`
With these changes I'm working on uploading all missing versions until 1.6.2 to conda
EDIT: I managed to build and upload all missing versions until 1.6.2 to conda :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2309/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2309",
"html_url": "https://github.com/huggingface/datasets/pull/2309",
"diff_url": "https://github.com/huggingface/datasets/pull/2309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2309.patch",
"merged_at": 1620057677000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2302/comments | https://api.github.com/repos/huggingface/datasets/issues/2302/events | https://github.com/huggingface/datasets/pull/2302 | 873,961,435 | MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3 | 2,302 | Add SubjQA dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and ๐ค ?",
"Hi @lewtun, thanks for adding this dataset!\r\n\r\nIf the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\nHere's a link to the [relevant section of the guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md#dataset-creation), let me know if you have any questions!",
"> If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\ngreat idea @yjernite! i've added some extra information / moved things as you suggest and will wrap up the rest tomorrow :)",
"hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review!"
] | 1,619,967,080,000 | 1,620,638,479,000 | 1,620,638,479,000 | MEMBER | null | Hello datasetters ๐!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
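Once merged, loading it should look roughly like this — the config name `"books"` is one of the review domains and is given here as an assumption:
```python
from datasets import load_dataset

# One config per review domain; "books" is assumed for illustration
subjqa = load_dataset("subjqa", "books")
print(subjqa)
print(subjqa["train"][0])
```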
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2302/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2302",
"html_url": "https://github.com/huggingface/datasets/pull/2302",
"diff_url": "https://github.com/huggingface/datasets/pull/2302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2302.patch",
"merged_at": 1620638479000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2301/comments | https://api.github.com/repos/huggingface/datasets/issues/2301/events | https://github.com/huggingface/datasets/issues/2301 | 873,941,266 | MDU6SXNzdWU4NzM5NDEyNjY= | 2,301 | Unable to setup dev env on Windows | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.microsoft.com/visual-cpp-build-tools/",
"Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot."
] | 1,619,961,642,000 | 1,620,055,081,000 | 1,620,055,054,000 | CONTRIBUTOR | null | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5)
Collecting pyarrow>=0.17.1
Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB)
Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1)
Collecting pandas
Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB)
Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0)
Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2)
Collecting multiprocess
Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB)
Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0)
Collecting huggingface_hub<0.1.0
Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1)
Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3)
Collecting pytest-xdist
Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB)
Collecting apache-beam>=2.24.0
Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB)
Collecting elasticsearch
Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB)
Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43)
Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43)
Collecting moto[s3]==1.3.16
Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB)
Collecting rarfile>=4.0
Using cached rarfile-4.0-py3-none-any.whl (28 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB)
Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1)
Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1)
Collecting bs4
Using cached bs4-0.0.1-py3-none-any.whl
Collecting conllu
Using cached conllu-4.4-py2.py3-none-any.whl (15 kB)
Collecting langdetect
Using cached langdetect-1.0.8-py3-none-any.whl
Collecting lxml
Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)
Collecting mwparserfromhell
Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB)
Collecting nltk
Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB)
Collecting openpyxl
Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB)
Collecting py7zr
Using cached py7zr-0.15.2-py3-none-any.whl (66 kB)
Collecting tldextract
Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB)
Collecting zstandard
Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 kB)
Collecting bert_score>=0.3.6
Using cached bert_score-0.3.9-py3-none-any.whl (59 kB)
Collecting rouge_score
Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)
Collecting sacrebleu
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting sklearn
Using cached sklearn-0.0-py2.py3-none-any.whl
Collecting jiwer
Using cached jiwer-2.2.0-py3-none-any.whl (13 kB)
Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1)
Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2)
Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1)
Collecting black
Using cached black-21.4b2-py3-none-any.whl (130 kB)
Collecting isort
Using cached isort-5.8.0-py3-none-any.whl (103 kB)
Collecting flake8==3.7.9
Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1)
Collecting entrypoints<0.4.0,>=0.3.0
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting pyflakes<2.2.0,>=2.1.0
Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB)
Collecting pycodestyle<2.6.0,>=2.5.0
Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB)
Collecting mccabe<0.7.0,>=0.6.0
Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1)
Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3)
Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0)
Requirement already satisfied: cryptography>=2.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7)
Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0)
Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1)
Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0)
Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10)
Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1)
Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3)
Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125)
Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3)
Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1)
Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0)
Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1)
Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0)
Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0)
Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0)
Collecting hdfs<3.0.0,>=2.1.0
Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB)
Collecting fastavro<2,>=0.21.4
Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB)
Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4)
Collecting pymongo<4.0.0,>=3.8.0
Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB)
Collecting crcmod<2.0,>=1.7
Using cached crcmod-1.7-py3-none-any.whl
Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1
Using cached avro_python3-1.9.2.1-py3-none-any.whl
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3)
Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2)
Collecting oauth2client<5,>=2.0.1
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting pydot<2,>=1.2.0
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8)
Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1)
Collecting matplotlib
Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB)
Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9)
Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32)
Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1)
Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0)
Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5)
Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20)
Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227)
Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0)
Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2)
Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12)
Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3)
Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0)
Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2)
Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8)
Requirement already satisfied: pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0)
Collecting keras-preprocessing~=1.1.2
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0)
Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0)
Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2)
Collecting opt-einsum~=3.3.0
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting gast==0.3.3
Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Collecting google-pasta~=0.2
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0)
Collecting astunparse~=1.6.3
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting h5py~=2.10.0
Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB)
Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0)
Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45)
Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9)
Collecting pathspec<1,>=0.8.1
Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB)
Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2)
Collecting appdirs
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting mypy-extensions>=0.4.3
Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1)
Collecting python-Levenshtein
Using cached python-Levenshtein-0.12.2.tar.gz (50 kB)
Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1)
Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0)
Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1)
Collecting multiprocess
Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB)
Using cached multiprocess-0.70.10.zip (2.4 MB)
Using cached multiprocess-0.70.9-py3-none-any.whl
Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1)
Collecting et-xmlfile
Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)
Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4)
Collecting pyppmd<0.13.0,>=0.12.1
Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB)
Collecting pycryptodome>=3.6.6
Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB)
Collecting bcj-cffi<0.6.0,>=0.5.1
Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB)
Collecting multivolumefile<0.3.0,>=0.2.0
Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB)
Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0)
Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1)
Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0)
Requirement already satisfied: colorama in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4)
Collecting pytest-forked
Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)
Collecting execnet>=1.1
Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5)
Collecting portalocker==2.0.0
Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0)
Building wheels for collected packages: python-Levenshtein
Building wheel for python-Levenshtein (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for python-Levenshtein
Running setup.py clean for python-Levenshtein
Failed to build python-Levenshtein
Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam
Running setup.py install for python-Levenshtein ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output.
```
Here are the conda and Python versions:
```bat
(env) C:\testing\datasets>conda --version
conda 4.9.2
(env) C:\testing\datasets>python --version
Python 3.7.10
```
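For reference, a minimal sketch of how this kind of failure is usually resolved: the first option mirrors the fix suggested in the comments (installing the Microsoft C++ Build Tools), while the conda-forge package in the second option is an assumption added here for illustration and is not confirmed anywhere in this issue.
```bat
REM Option 1 (fix suggested in the comments): install "Microsoft C++ Build Tools"
REM version 14 or newer from https://visualstudio.microsoft.com/visual-cpp-build-tools/
REM and rerun the editable install so the python-Levenshtein C extension can compile:
pip install -e ".[dev]"

REM Option 2 (assumption, for illustration only): install a prebuilt
REM python-Levenshtein from conda-forge first, so pip sees the requirement
REM already satisfied and skips the failing C build step:
conda install -c conda-forge python-levenshtein
pip install -e ".[dev]"
```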
Please help me out. Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2301/timeline | null | completed | null | null | false |