url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2533/comments | https://api.github.com/repos/huggingface/datasets/issues/2533/events | https://github.com/huggingface/datasets/pull/2533 | 927,193,264 | MDExOlB1bGxSZXF1ZXN0Njc1Mzg2OTMw | 2,533 | Add task template for automatic speech recognition | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@SBrandeis @lhoestq i've integrated your suggestions, so this is ready for another review :)",
"Merging if it's good for you @lewtun :)"
] | 1,624,365,902,000 | 1,624,464,886,000 | 1,624,463,817,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2533",
"html_url": "https://github.com/huggingface/datasets/pull/2533",
"diff_url": "https://github.com/huggingface/datasets/pull/2533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2533.patch",
"merged_at": 1624463817000
} | This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import AutomaticSpeechRecognition
ds = load_dataset("timit_asr", split="train[:10]")
# Dataset({
# features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
# num_rows: 10
# })
task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text")
ds.prepare_for_task(task)
# Dataset({
# features: ['audio_file', 'transcription'],
# num_rows: 10
# })
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2533/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2532/comments | https://api.github.com/repos/huggingface/datasets/issues/2532/events | https://github.com/huggingface/datasets/issues/2532 | 927,063,196 | MDU6SXNzdWU5MjcwNjMxOTY= | 2,532 | Tokenizer's normalization preprocessor causes misalignment in return_offsets_mapping for token classification task | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?",
"> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface/transformers"
] | 1,624,356,498,000 | 1,624,425,445,000 | 1,624,425,445,000 | CONTRIBUTOR | null | null | null | [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instances in different languages, but unfortunately [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`:
![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png)
Without the try-catch block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing)
It is clear that the normalizer is the step that breaks the alignment, as `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト'.
One workaround is to apply `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) under the name `udposTestDatasetWorkaround`.
I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to include it in their own code. But I don't understand the tokenizer's code well enough to do this myself.
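For concreteness, here is a minimal sketch of that workaround (not the notebook's exact code; `backend_tokenizer` is the public accessor for `_tokenizer`, and the input list is made up for illustration):
```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")

# The backend normalizer expands the Kana ligature 'ヿ' into two characters,
# so offsets computed on the normalized string no longer match the raw input.
print(tokenizer.backend_tokenizer.normalizer.normalize_str("ヿ"))  # 'コト'

# Workaround: normalize the raw texts up front, so that the offsets in
# `return_offsets_mapping` refer to the same (already normalized) strings.
texts = ["ヿ"]  # made-up input
texts = [tokenizer.backend_tokenizer.normalizer.normalize_str(t) for t in texts]
encodings = tokenizer(texts, return_offsets_mapping=True)
```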
p.s.
**I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)**
`get_dataset` is just a simple wrapper around `load_dataset`
and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2532/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2531/comments | https://api.github.com/repos/huggingface/datasets/issues/2531/events | https://github.com/huggingface/datasets/pull/2531 | 927,017,924 | MDExOlB1bGxSZXF1ZXN0Njc1MjM2MDYz | 2,531 | Fix dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,353,430,000 | 1,624,355,230,000 | 1,624,355,229,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2531",
"html_url": "https://github.com/huggingface/datasets/pull/2531",
"diff_url": "https://github.com/huggingface/datasets/pull/2531.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2531.patch",
"merged_at": 1624355229000
} | The dev version that ends in `.dev0` should be greater than the current version.
However, it happens that `1.8.0 > 1.8.0.dev0`, for example.
Therefore we need to use `1.8.1.dev0` in this case.
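For illustration (this snippet is not part of the PR), PEP 440 ordering as implemented by the `packaging` library shows why the extra patch bump is needed:
```python
from packaging.version import Version

# `.dev0` releases are pre-releases, so they sort *below* the final release:
assert Version("1.8.0.dev0") < Version("1.8.0")
# Bumping the patch number restores the expected "dev > current" ordering:
assert Version("1.8.1.dev0") > Version("1.8.0")
```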
I updated the dev version to `1.8.1.dev0`, and I also added a comment about this in the release steps in setup.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2531/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2530/comments | https://api.github.com/repos/huggingface/datasets/issues/2530/events | https://github.com/huggingface/datasets/pull/2530 | 927,013,773 | MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0 | 2,530 | Fixed label parsing in the ProductReviews dataset | {
"login": "yavuzKomecoglu",
"id": 5150963,
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yavuzKomecoglu",
"html_url": "https://github.com/yavuzKomecoglu",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?",
"Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n```yaml\r\npretty_name: Turkish Product Reviews\r\n```\r\n- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file",
"> Hi ! Thanks for fixing this :)\r\n> \r\n> The CI fails for two reasons:\r\n> \r\n> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n> \r\n> \r\n> ```yaml\r\n> pretty_name: Turkish Product Reviews\r\n> ```\r\n> \r\n> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file\r\n\r\nMany thanks for the quick feedback.\r\nI made the relevant fixes but still got the error :(",
"> Thanks !\r\n> The CI was failing because of the dataset card that was missing some sections. I fixed that.\r\n> \r\n> It's all good now\r\n\r\nSuper. Thanks for the support."
] | 1,624,353,165,000 | 1,624,366,520,000 | 1,624,366,360,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2530",
"html_url": "https://github.com/huggingface/datasets/pull/2530",
"diff_url": "https://github.com/huggingface/datasets/pull/2530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2530.patch",
"merged_at": 1624366360000
} | Fixed issue with parsing dataset labels. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2530/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2529/comments | https://api.github.com/repos/huggingface/datasets/issues/2529/events | https://github.com/huggingface/datasets/pull/2529 | 926,378,812 | MDExOlB1bGxSZXF1ZXN0Njc0NjkxNjA5 | 2,529 | Add summarization template | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Nice thanks !\r\n> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.\r\n\r\nsure, on it! thanks for the explanations about the `self._to` method :)",
"@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :))"
] | 1,624,291,711,000 | 1,624,458,131,000 | 1,624,455,010,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2529",
"html_url": "https://github.com/huggingface/datasets/pull/2529",
"diff_url": "https://github.com/huggingface/datasets/pull/2529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2529.patch",
"merged_at": 1624455010000
} | This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" and "abstractive" summarization; both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2529/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2528/comments | https://api.github.com/repos/huggingface/datasets/issues/2528/events | https://github.com/huggingface/datasets/issues/2528 | 926,314,656 | MDU6SXNzdWU5MjYzMTQ2NTY= | 2,528 | Logging cannot be set to NOTSET similar to transformers | {
"login": "joshzwiebel",
"id": 34662010,
"node_id": "MDQ6VXNlcjM0NjYyMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshzwiebel",
"html_url": "https://github.com/joshzwiebel",
"followers_url": "https://api.github.com/users/joshzwiebel/followers",
"following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}",
"gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions",
"organizations_url": "https://api.github.com/users/joshzwiebel/orgs",
"repos_url": "https://api.github.com/users/joshzwiebel/repos",
"events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshzwiebel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`."
] | 1,624,287,894,000 | 1,624,545,767,000 | 1,624,545,767,000 | NONE | null | null | null | ## Describe the bug
In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets; however, in Datasets this is no longer possible. This is because transformers sets the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449)
`disable=bool(logging.get_verbosity() == logging.NOTSET)`
and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493)
`not_verbose = bool(logger.getEffectiveLevel() > WARNING)`
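To see why the two checks disagree exactly at `NOTSET`, note that `logging.NOTSET == 0` while `logging.WARNING == 30` (hypothetical illustration, not code from either library):
```python
import logging

level = logging.NOTSET  # 0

# transformers: the progress bar is disabled when the level is NOTSET
transformers_disable = bool(level == logging.NOTSET)  # True

# datasets: "not verbose" only when the level is above WARNING,
# so at NOTSET the tqdm/IPywidgets progress bar is still created
datasets_not_verbose = bool(level > logging.WARNING)  # False
```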
## Steps to reproduce the bug
```python
import datasets
import logging
datasets.logging.get_verbosity = lambda : logging.NOTSET
datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy")
```
## Expected results
The code should download and load the dataset as normal without displaying progress bars
## Actual results
```ImportError Traceback (most recent call last)
<ipython-input-4-aec65c0509c6> in <module>
----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy")
~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
713 dataset=True,
714 return_resolved_file_path=True,
--> 715 use_auth_token=use_auth_token,
716 )
717 # Set the base path for downloads as the parent of the script location
~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
350 file_path = hf_bucket_url(path, filename=name, dataset=False)
351 try:
--> 352 local_path = cached_path(file_path, download_config=download_config)
353 except FileNotFoundError:
354 raise FileNotFoundError(
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
289 use_etag=download_config.use_etag,
290 max_retries=download_config.max_retries,
--> 291 use_auth_token=download_config.use_auth_token,
292 )
293 elif os.path.exists(url_or_filename):
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
668 headers=headers,
669 cookies=cookies,
--> 670 max_retries=max_retries,
671 )
672
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)
493 initial=resume_size,
494 desc="Downloading",
--> 495 disable=not_verbose,
496 )
497 for chunk in response.iter_content(chunk_size=1024):
~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs)
217 total = self.total * unit_scale if self.total else self.total
218 self.container = self.status_printer(
--> 219 self.fp, total, self.desc, self.ncols)
220 self.sp = self.display
221
~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols)
95 if IProgress is None: # #187 #451 #558 #872
96 raise ImportError(
---> 97 "IProgress not found. Please update jupyter and ipywidgets."
98 " See https://ipywidgets.readthedocs.io/en/stable"
99 "/user_install.html")
ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8
- Python version: 3.7.10
- PyArrow version: 3.0.0
I am running this code on Deepnote, which, importantly for this issue, **does not** support IPywidgets
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2528/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2527/comments | https://api.github.com/repos/huggingface/datasets/issues/2527/events | https://github.com/huggingface/datasets/pull/2527 | 926,031,525 | MDExOlB1bGxSZXF1ZXN0Njc0MzkzNjQ5 | 2,527 | Replace bad `n>1M` size tag | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,268,555,000 | 1,624,288,010,000 | 1,624,288,009,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2527",
"html_url": "https://github.com/huggingface/datasets/pull/2527",
"diff_url": "https://github.com/huggingface/datasets/pull/2527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2527.patch",
"merged_at": 1624288009000
} | Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc.
This led to unexpected results when searching for datasets bigger than 1M on the Hub, since only the ones tagged `n>1M` were shown. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2527/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2526/comments | https://api.github.com/repos/huggingface/datasets/issues/2526/events | https://github.com/huggingface/datasets/issues/2526 | 925,929,228 | MDU6SXNzdWU5MjU5MjkyMjg= | 2,526 | Add COCO datasets | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco",
"I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon.\r\n@NielsRogge ",
"I started adding COCO, will be done tomorrow EOD\r\nmy work so far https://github.com/merveenoyan/datasets (my fork)",
"Hi Merve @merveenoyan , thank you so much for your great contribution! May I ask about the current progress of your implementation? Cuz I see the pull request is still in progess here. Or can I just run the COCO scripts in your fork repo?",
"Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon. ",
"> Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon.\r\n\r\nIt's really nice of you!! I see you've commited another version just now",
"@yixuanren we're working on it, will be available soon, thanks a lot for your patience"
] | 1,624,261,712,000 | 1,640,007,218,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2526/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2526/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2525/comments | https://api.github.com/repos/huggingface/datasets/issues/2525/events | https://github.com/huggingface/datasets/pull/2525 | 925,896,358 | MDExOlB1bGxSZXF1ZXN0Njc0Mjc5MTgy | 2,525 | Use scikit-learn package rather than sklearn in setup.py | {
"login": "lesteve",
"id": 1680079,
"node_id": "MDQ6VXNlcjE2ODAwNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1680079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lesteve",
"html_url": "https://github.com/lesteve",
"followers_url": "https://api.github.com/users/lesteve/followers",
"following_url": "https://api.github.com/users/lesteve/following{/other_user}",
"gists_url": "https://api.github.com/users/lesteve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lesteve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lesteve/subscriptions",
"organizations_url": "https://api.github.com/users/lesteve/orgs",
"repos_url": "https://api.github.com/users/lesteve/repos",
"events_url": "https://api.github.com/users/lesteve/events{/privacy}",
"received_events_url": "https://api.github.com/users/lesteve/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,259,065,000 | 1,624,269,673,000 | 1,624,265,853,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2525",
"html_url": "https://github.com/huggingface/datasets/pull/2525",
"diff_url": "https://github.com/huggingface/datasets/pull/2525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2525.patch",
"merged_at": 1624265853000
The `sklearn` package is a historical artifact and should probably not be used by anyone; see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats.
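A sketch of what the change amounts to in `setup.py` (the exact list contents are assumed from the PR description, not copied from the diff):
```python
# Before, the historical "sklearn" dummy package on PyPI was listed;
# after, the real distribution name is used directly.
TESTS_REQUIRE = [
    "scikit-learn",  # was: "sklearn"
]
```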
Note: this affects only `TESTS_REQUIRE`, so I guess it concerns only developers, not end users. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2525/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2524/comments | https://api.github.com/repos/huggingface/datasets/issues/2524/events | https://github.com/huggingface/datasets/pull/2524 | 925,610,934 | MDExOlB1bGxSZXF1ZXN0Njc0MDQzNzk1 | 2,524 | Raise FileNotFoundError in WindowsFileLock | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?",
"This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user."
] | 1,624,199,111,000 | 1,624,874,182,000 | 1,624,870,059,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2524",
"html_url": "https://github.com/huggingface/datasets/pull/2524",
"diff_url": "https://github.com/huggingface/datasets/pull/2524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2524.patch",
"merged_at": 1624870059000
} | Closes #2443 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2524/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2523/comments | https://api.github.com/repos/huggingface/datasets/issues/2523/events | https://github.com/huggingface/datasets/issues/2523 | 925,421,008 | MDU6SXNzdWU5MjU0MjEwMDg= | 2,523 | Fr | {
"login": "aDrIaNo34500",
"id": 71971234,
"node_id": "MDQ6VXNlcjcxOTcxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/71971234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aDrIaNo34500",
"html_url": "https://github.com/aDrIaNo34500",
"followers_url": "https://api.github.com/users/aDrIaNo34500/followers",
"following_url": "https://api.github.com/users/aDrIaNo34500/following{/other_user}",
"gists_url": "https://api.github.com/users/aDrIaNo34500/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aDrIaNo34500/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aDrIaNo34500/subscriptions",
"organizations_url": "https://api.github.com/users/aDrIaNo34500/orgs",
"repos_url": "https://api.github.com/users/aDrIaNo34500/repos",
"events_url": "https://api.github.com/users/aDrIaNo34500/events{/privacy}",
"received_events_url": "https://api.github.com/users/aDrIaNo34500/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,118,192,000 | 1,624,128,503,000 | 1,624,128,503,000 | NONE | null | null | null | __Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2523/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2522/comments | https://api.github.com/repos/huggingface/datasets/issues/2522/events | https://github.com/huggingface/datasets/issues/2522 | 925,334,379 | MDU6SXNzdWU5MjUzMzQzNzk= | 2,522 | Documentation Mistakes in Dataset: emotion | {
"login": "GDGauravDutta",
"id": 62606251,
"node_id": "MDQ6VXNlcjYyNjA2MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/62606251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GDGauravDutta",
"html_url": "https://github.com/GDGauravDutta",
"followers_url": "https://api.github.com/users/GDGauravDutta/followers",
"following_url": "https://api.github.com/users/GDGauravDutta/following{/other_user}",
"gists_url": "https://api.github.com/users/GDGauravDutta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GDGauravDutta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GDGauravDutta/subscriptions",
"organizations_url": "https://api.github.com/users/GDGauravDutta/orgs",
"repos_url": "https://api.github.com/users/GDGauravDutta/repos",
"events_url": "https://api.github.com/users/GDGauravDutta/events{/privacy}",
"received_events_url": "https://api.github.com/users/GDGauravDutta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi,\r\n\r\nthis issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side.",
"The documentation has another bug in the dataset card [here](https://huggingface.co/datasets/emotion). \r\n\r\nIn the dataset summary **six** emotions are mentioned: *\"six basic emotions: anger, fear, joy, love, sadness, and surprise\"*, however, in the datafields section we have only **five**:\r\n```\r\nlabel: a classification label, with possible values including sadness (0), joy (1), love (2), anger (3), fear (4).\r\n```"
] | 1,624,086,537,000 | 1,643,109,239,000 | null | NONE | null | null | null | As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information please refer to the paper.
But when we view the data, there are only six emotions: anger, fear, joy, sadness, surprise, and trust. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2522/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2522/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2521/comments | https://api.github.com/repos/huggingface/datasets/issues/2521/events | https://github.com/huggingface/datasets/pull/2521 | 925,030,685 | MDExOlB1bGxSZXF1ZXN0NjczNTgxNzQ4 | 2,521 | Insert text classification template for Emotion dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,031,779,000 | 1,624,267,351,000 | 1,624,267,351,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2521",
"html_url": "https://github.com/huggingface/datasets/pull/2521",
"diff_url": "https://github.com/huggingface/datasets/pull/2521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2521.patch",
"merged_at": 1624267351000
} | This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2521/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2520/comments | https://api.github.com/repos/huggingface/datasets/issues/2520/events | https://github.com/huggingface/datasets/issues/2520 | 925,015,004 | MDU6SXNzdWU5MjUwMTUwMDQ= | 2,520 | Datasets with tricky task templates | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | open | false | null | [] | null | [] | 1,624,030,437,000 | 1,624,031,186,000 | null | MEMBER | null | null | null | I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for.
## Text classification
* [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized.
* [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2520/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2519/comments | https://api.github.com/repos/huggingface/datasets/issues/2519/events | https://github.com/huggingface/datasets/pull/2519 | 924,903,240 | MDExOlB1bGxSZXF1ZXN0NjczNDcyMzYy | 2,519 | Improve performance of pandas arrow extractor | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https://issues.apache.org/jira/browse/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype",
"@lhoestq I think I found a workaround... 😉 ",
"For some reason the benchmarks are not run Oo",
"Anyway, merging.\r\nWe'll see on master how much speed ups we got"
] | 1,624,022,681,000 | 1,624,266,366,000 | 1,624,266,366,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2519",
"html_url": "https://github.com/huggingface/datasets/pull/2519",
"diff_url": "https://github.com/huggingface/datasets/pull/2519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2519.patch",
"merged_at": 1624266366000
} | While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2519/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2518/comments | https://api.github.com/repos/huggingface/datasets/issues/2518/events | https://github.com/huggingface/datasets/pull/2518 | 924,654,100 | MDExOlB1bGxSZXF1ZXN0NjczMjU5Nzg1 | 2,518 | Add task templates for tydiqa and xquad | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just tested TydiQA and it works fine :)"
] | 1,624,003,594,000 | 1,624,028,477,000 | 1,624,027,833,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2518",
"html_url": "https://github.com/huggingface/datasets/pull/2518",
"diff_url": "https://github.com/huggingface/datasets/pull/2518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2518.patch",
"merged_at": 1624027833000
} | This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. A usage sketch follows the notes below.
Notes:
* I could not test the tydiqa implementation since I don't have enough disk space 😢. But I am confident the template works :)
* there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434
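Here is that usage sketch (hedged: the `QuestionAnsweringExtractive` template class and the squad-style column names are assumed from the templates this PR adds; the config and split below are illustrative):
```python
from datasets import load_dataset
from datasets.tasks import QuestionAnsweringExtractive

ds = load_dataset("xquad", "xquad.en", split="validation")
task = QuestionAnsweringExtractive(
    question_column="question", context_column="context", answers_column="answers"
)
ds = ds.prepare_for_task(task)
# The prepared dataset exposes the standardized
# `question` / `context` / `answers` columns expected by extractive QA models.
```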
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2518/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2517/comments | https://api.github.com/repos/huggingface/datasets/issues/2517/events | https://github.com/huggingface/datasets/pull/2517 | 924,643,345 | MDExOlB1bGxSZXF1ZXN0NjczMjUwODk1 | 2,517 | Fix typo in MatthewsCorrelation class name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,002,786,000 | 1,624,005,835,000 | 1,624,005,835,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2517",
"html_url": "https://github.com/huggingface/datasets/pull/2517",
"diff_url": "https://github.com/huggingface/datasets/pull/2517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2517.patch",
"merged_at": 1624005835000
} | Close #2513. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2517/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2516/comments | https://api.github.com/repos/huggingface/datasets/issues/2516/events | https://github.com/huggingface/datasets/issues/2516 | 924,597,470 | MDU6SXNzdWU5MjQ1OTc0NzA= | 2,516 | datasets.map pickle issue resulting in invalid mapping function | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?",
"@lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get\r\n\r\n> Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized\r\n\r\nSo I remove the Custom PreTokenizer in `__getstate__` and then restore it in `__setstate__` (since it doesn't contain any state). This is what my `__getstate__` / `__setstate__` looks like:\r\n\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = self.__dict__.copy()\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n\r\n def __setstate__(self, d):\r\n \"\"\"\r\n Reinstates pre_tokenizer\r\n \"\"\"\r\n logger.debug(\"Reattaching pre_tokenizer\")\r\n self.__dict__ = d\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\nIf this is the case can you think of another way of avoiding my issue?",
"Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend.",
"Doing a deep copy results in the warning:\r\n\r\n> 06/20/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'=<function tokenize_function at 0x7f1e95f05d40> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n\r\n\r\n```\r\ndef __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = copy.deepcopy(self.__dict__)\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n```",
"Looks like there is still an object that is not pickable in your `tokenize_function` function.\r\n\r\nYou can test if an object can be pickled and hashed by using \r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nHasher.hash(my_object)\r\n```\r\n\r\nUnder the hood it pickles the object to compute its hash, so it calls `__getstate__` when applicable.",
"I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error.\r\n\r\nI'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I removed the pre-tokenizers, did a deep copy and then re-generated it.\r\n\r\nIt does work - although I noticed Hasher doesn't call `__hash__` if the object being hashed implements it which I feel it should? If it did I could return a hash of the tokenizers.json file instead.\r\n\r\n```\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n self.backend_tokenizer.pre_tokenizer = tokenizers.pre_tokenizers.Sequence([])\r\n out = copy.deepcopy(self.__dict__) #self.__dict__.copy()\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\n return out\r\n```\r\n",
"I'm glad you figured something out :)\r\n\r\nRegarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes across sessions for the caching to work properly."
] | 1,623,998,846,000 | 1,624,456,069,000 | null | NONE | null | null | null | I trained my own tokenizer, and I needed to use a custom Python class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__getstate__` / `__setstate__` mechanism. I think it's correct, but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts.
The following reproduces the issue - most likely I'm missing something
A simulated tokeniser which can be pickled
```
class CustomTokenizer:
def __init__(self):
self.state = "init"
def __getstate__(self):
print("__getstate__ called")
out = self.__dict__.copy()
self.state = "pickled"
return out
def __setstate__(self, d):
print("__setstate__ called")
self.__dict__ = d
self.state = "restored"
tokenizer = CustomTokenizer()
```
Test that it actually works - prints "__getstate__ called" and "__setstate__ called"
```
import pickle
serialized = pickle.dumps(tokenizer)
restored = pickle.loads(serialized)
assert restored.state == "restored"
```
Simulate a function that tokenises examples; when dataset.map is called, this function fails because the tokenizer's state was never restored:
```
def tokenize_function(examples):
assert tokenizer.state == "restored" # this shouldn't fail but it does
output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer
return output
```
Use map to simulate tokenization
```
import glob
from datasets import load_dataset
assert tokenizer.state == "restored"
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
)
```
What's happening is that I can see __getstate__ is called but not __setstate__, so the state of the tokenizer used inside `tokenize_function` is invalid at the point it's actually executed. As far as I can see this doesn't matter for the standard tokenizers, as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
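For illustration, a minimal sketch of what I believe the hashing path does, using the `CustomTokenizer` instance defined above and the `Hasher` mentioned in the comments (this is my assumption about the mechanism, not documented behaviour):
```python
from datasets.fingerprint import Hasher

# Hashing pickles the object (invoking __getstate__) but never unpickles
# anything, so __setstate__ is never called and the in-place mutation done
# inside __getstate__ leaks into the live object.
Hasher.hash(tokenizer)
print(tokenizer.state)  # "pickled" -- neither "init" nor "restored"
```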
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-22-a2aef4f74aaa> in <module>
8 tokenized_datasets = datasets.map(
9 tokenize_function,
---> 10 batched=True,
11 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1633 fn_kwargs=fn_kwargs,
1634 new_fingerprint=new_fingerprint,
-> 1635 desc=desc,
1636 )
1637 else:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
184 }
185 # apply actual function
--> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
188 # re-apply format to the output
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1961 indices,
1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 1963 offset=offset,
1964 )
1965 except NumExamplesMismatch:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1854 processed_inputs = (
-> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1856 )
1857 if update_data is None:
<ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples)
1 def tokenize_function(examples):
----> 2 assert tokenizer.state == "restored"
3 tokenizer(examples)
4 return examples
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2516/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2515/comments | https://api.github.com/repos/huggingface/datasets/issues/2515/events | https://github.com/huggingface/datasets/pull/2515 | 924,435,447 | MDExOlB1bGxSZXF1ZXN0NjczMDc3NTIx | 2,515 | CRD3 dataset card | {
"login": "wilsonyhlee",
"id": 1937386,
"node_id": "MDQ6VXNlcjE5MzczODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1937386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilsonyhlee",
"html_url": "https://github.com/wilsonyhlee",
"followers_url": "https://api.github.com/users/wilsonyhlee/followers",
"following_url": "https://api.github.com/users/wilsonyhlee/following{/other_user}",
"gists_url": "https://api.github.com/users/wilsonyhlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilsonyhlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilsonyhlee/subscriptions",
"organizations_url": "https://api.github.com/users/wilsonyhlee/orgs",
"repos_url": "https://api.github.com/users/wilsonyhlee/repos",
"events_url": "https://api.github.com/users/wilsonyhlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilsonyhlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,975,847,000 | 1,624,270,724,000 | 1,624,270,724,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2515",
"html_url": "https://github.com/huggingface/datasets/pull/2515",
"diff_url": "https://github.com/huggingface/datasets/pull/2515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2515.patch",
"merged_at": 1624270724000
} | This PR adds additional information to the CRD3 dataset card. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2515/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2514/comments | https://api.github.com/repos/huggingface/datasets/issues/2514/events | https://github.com/huggingface/datasets/issues/2514 | 924,417,172 | MDU6SXNzdWU5MjQ0MTcxNzI= | 2,514 | Can datasets remove duplicated rows? | {
"login": "liuxinglan",
"id": 16516583,
"node_id": "MDQ6VXNlcjE2NTE2NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/16516583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuxinglan",
"html_url": "https://github.com/liuxinglan",
"followers_url": "https://api.github.com/users/liuxinglan/followers",
"following_url": "https://api.github.com/users/liuxinglan/following{/other_user}",
"gists_url": "https://api.github.com/users/liuxinglan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuxinglan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuxinglan/subscriptions",
"organizations_url": "https://api.github.com/users/liuxinglan/orgs",
"repos_url": "https://api.github.com/users/liuxinglan/repos",
"events_url": "https://api.github.com/users/liuxinglan/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuxinglan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! For now this is probably the best option.\r\nWe might add a feature like this in the feature as well.\r\n\r\nDo you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\nOtherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases",
"Yes, I'd like to work on this feature once I'm done with #2500, but first I have to do some research, and see if the implementation wouldn't be too complex.\r\n\r\nIn the meantime, maybe [this lib](https://github.com/TomScheffers/pyarrow_ops) can help. However, note that this lib operates directly on pyarrow tables and relies only on `hash` to find duplicates (e.g. `-1` and `-2` have the same hash in Python 3, so this lib will treat them as duplicates), which doesn't make much sense.",
"> Hi ! For now this is probably the best option.\r\n> We might add a feature like this in the feature as well.\r\n> \r\n> Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\n> Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases\r\n\r\nGreat if this is can be done. Thanks!!\r\n\r\nNot sure if you are asking me. In any case I don't know of any unfortunately :( in practice if data is really large we normally do it with spark (only for info. I understand this is not useful in developing this library..)",
"Hello,\r\n\r\nI'm also interested in this feature.\r\nHas there been progress on this issue?\r\n\r\nCould we use a similar trick as above, but with a better hashing algorithm like SHA?\r\n\r\nWe could also use a [bloom filter](https://en.wikipedia.org/wiki/Bloom_filter), should we care a lot about collision in this case?",
"For reference, we can get a solution fairly easily if we assume that we can hold in memory all unique values. \r\n\r\n```python\r\nfrom datasets import Dataset\r\nfrom itertools import cycle\r\nfrom functools import partial\r\n\r\nmemory = set()\r\ndef is_unique(elem:Any , column: str, memory: set) -> bool:\r\n if elem[column] in memory:\r\n return False\r\n else:\r\n memory.add(elem[column])\r\n return True\r\n\r\n# Example dataset\r\nds = Dataset.from_dict({\"col1\" : [sent for i, sent in zip(range(10), cycle([\"apple\", \"orange\", \"pear\"]))],\r\n \"col2\": [i % 5 for i in range(10)]})\r\n\r\n# Drop duplicates in `ds` on \"col1\"\r\nds2 = ds.filter(partial(is_unique, column=\"col1\", memory=memory))\r\n```\r\n\r\nOf course, we can improve the API so that we can introduce `Dataset.drop_duplicates`.\r\nFor the parallel version, we can use a shared memory set.",
"An approach that works assuming you can hold the all the unique document hashes in memory:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndef get_hash(example):\r\n \"\"\"Get hash of content field.\"\"\"\r\n return {\"hash\": hash(example[\"content\"])} # can use any hashing function here\r\n \r\ndef check_uniques(example, uniques):\r\n \"\"\"Check if current hash is still in set of unique hashes and remove if true.\"\"\"\r\n if example[\"hash\"] in uniques:\r\n uniques.remove(example[\"hash\"])\r\n return True\r\n else:\r\n return False\r\n\r\nds = load_dataset(\"some_dataset\")\r\nds = ds.map(get_hash)\r\nuniques = set(ds.unique(\"hash\"))\r\nds_filter = ds.filter(check_uniques, fn_kwargs={\"uniques\": uniques})\r\n```\r\nIf the `uniques` could be stored in arrow then no additional memory would used at all but I don't know if this is possible.\r\n"
] | 1,623,972,938,000 | 1,638,434,361,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I find myself relying on datasets more and more, just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and am always converting datasets to pandas to do that.
**Describe the solution you'd like**
Have a "remove duplicated rows" functionality.
**Describe alternatives you've considered**
Convert the dataset to pandas, remove duplicates, and convert back (as sketched below).
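A minimal sketch of that round trip (the column name is a placeholder; note this materializes the whole dataset in memory):
```python
from datasets import Dataset

ds = Dataset.from_dict({"col1": ["a", "a", "b"]})  # toy example
df = ds.to_pandas()                                # Arrow table -> DataFrame
df = df.drop_duplicates(subset=["col1"])           # "col1" is a placeholder
ds_dedup = Dataset.from_pandas(df, preserve_index=False)
print(ds_dedup["col1"])  # ['a', 'b']
```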
**Additional context**
no | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2514/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2513/comments | https://api.github.com/repos/huggingface/datasets/issues/2513/events | https://github.com/huggingface/datasets/issues/2513 | 924,174,413 | MDU6SXNzdWU5MjQxNzQ0MTM= | 2,513 | Corelation should be Correlation | {
"login": "colbym-MM",
"id": 71514164,
"node_id": "MDQ6VXNlcjcxNTE0MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/71514164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/colbym-MM",
"html_url": "https://github.com/colbym-MM",
"followers_url": "https://api.github.com/users/colbym-MM/followers",
"following_url": "https://api.github.com/users/colbym-MM/following{/other_user}",
"gists_url": "https://api.github.com/users/colbym-MM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/colbym-MM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/colbym-MM/subscriptions",
"organizations_url": "https://api.github.com/users/colbym-MM/orgs",
"repos_url": "https://api.github.com/users/colbym-MM/repos",
"events_url": "https://api.github.com/users/colbym-MM/events{/privacy}",
"received_events_url": "https://api.github.com/users/colbym-MM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @colbym-MM, thanks for reporting. We are fixing it."
] | 1,623,950,928,000 | 1,624,005,835,000 | 1,624,005,835,000 | NONE | null | null | null | https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2513/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2512/comments | https://api.github.com/repos/huggingface/datasets/issues/2512/events | https://github.com/huggingface/datasets/issues/2512 | 924,069,353 | MDU6SXNzdWU5MjQwNjkzNTM= | 2,512 | seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict' | {
"login": "avidale",
"id": 8642136,
"node_id": "MDQ6VXNlcjg2NDIxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8642136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avidale",
"html_url": "https://github.com/avidale",
"followers_url": "https://api.github.com/users/avidale/followers",
"following_url": "https://api.github.com/users/avidale/following{/other_user}",
"gists_url": "https://api.github.com/users/avidale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avidale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avidale/subscriptions",
"organizations_url": "https://api.github.com/users/avidale/orgs",
"repos_url": "https://api.github.com/users/avidale/repos",
"events_url": "https://api.github.com/users/avidale/events{/privacy}",
"received_events_url": "https://api.github.com/users/avidale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Sorry, I was using an old version of sequeval"
] | 1,623,944,162,000 | 1,623,944,767,000 | 1,623,944,767,000 | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
seqeval = load_metric("seqeval")
seqeval.compute(predictions=[['A']], references=[['A']])
```
## Expected results
The function computes a dict with metrics
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-39-69a57f5cf06f> in <module>
1 from datasets import load_dataset, load_metric
2 seqeval = load_metric("seqeval")
----> 3 seqeval.compute(predictions=[['A']], references=[['A']])
~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs)
396 references = self.data["references"]
397 with temp_seed(self.seed):
--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)
399
400 if self.buf_writer is not None:
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix)
95
96 def _compute(self, predictions, references, suffix=False):
---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)
98 report.pop("macro avg")
99 report.pop("weighted avg")
TypeError: classification_report() got an unexpected keyword argument 'output_dict'
```
## Environment info
sklearn=0.24
datasets=1.1.3
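Per the resolution in the comments above (an outdated seqeval, not sklearn, was the culprit), one way to check the locally installed seqeval is shown below; the check itself is illustrative:
```python
import inspect

from seqeval.metrics import classification_report

# Recent seqeval versions accept output_dict; older ones raise the
# TypeError shown above when the metric passes output_dict=True.
print("output_dict" in inspect.signature(classification_report).parameters)
```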
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2512/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2511/comments | https://api.github.com/repos/huggingface/datasets/issues/2511/events | https://github.com/huggingface/datasets/issues/2511 | 923,762,133 | MDU6SXNzdWU5MjM3NjIxMzM= | 2,511 | Add C4 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Update on this: I'm computing the checksums of the data files. It will be available soon",
"Added in #2575 :)"
] | 1,623,925,864,000 | 1,625,488,618,000 | 1,625,488,617,000 | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** *C4*
- **Description:** *https://github.com/allenai/allennlp/discussions/5056*
- **Paper:** *https://arxiv.org/abs/1910.10683*
- **Data:** *https://huggingface.co/datasets/allenai/c4*
- **Motivation:** *Used a lot for pretraining*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Should fix https://github.com/huggingface/datasets/issues/1710 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2511/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2510/comments | https://api.github.com/repos/huggingface/datasets/issues/2510/events | https://github.com/huggingface/datasets/pull/2510 | 923,735,485 | MDExOlB1bGxSZXF1ZXN0NjcyNDY3MzY3 | 2,510 | Add align_labels_with_mapping to DatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,924,215,000 | 1,623,926,725,000 | 1,623,926,724,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2510",
"html_url": "https://github.com/huggingface/datasets/pull/2510",
"diff_url": "https://github.com/huggingface/datasets/pull/2510.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2510.patch",
"merged_at": 1623926724000
} | https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method.
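For context, a minimal usage sketch of the `Dataset`-level method (the dataset and label mapping below are illustrative):
```python
from datasets import load_dataset

# Hypothetical model mapping to align the dataset's "label" ids with
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}
ds = load_dataset("glue", "mnli", split="train")
ds = ds.align_labels_with_mapping(label2id, "label")
```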
In this PR I also added `DatasetDict.align_labels_with_mapping`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2510/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2509/comments | https://api.github.com/repos/huggingface/datasets/issues/2509/events | https://github.com/huggingface/datasets/pull/2509 | 922,846,035 | MDExOlB1bGxSZXF1ZXN0NjcxNjcyMzU5 | 2,509 | Fix fingerprint when moving cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Windows, why are you doing this to me ?",
"Thanks @lhoestq, I'm starting reviewing this PR.",
"Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths",
"Took your suggestions into account @albertvillanova :)"
] | 1,623,861,909,000 | 1,624,287,904,000 | 1,624,287,903,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2509",
"html_url": "https://github.com/huggingface/datasets/pull/2509",
"diff_url": "https://github.com/huggingface/datasets/pull/2509.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2509.patch",
"merged_at": 1624287903000
} | The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
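Illustrative sketch of that scheme (the exact combination used in the PR may differ):
```python
from datasets.fingerprint import Hasher

# Hypothetical inputs: the path relative to the cache root, plus the split
relative_cache_dir = "my_dataset/1.0.0/abc123"  # dataset_name/version/config_id
requested_split = "train"
fingerprint = Hasher.hash((relative_cache_dir, requested_split))
```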
Close #2496
I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests from running on my machine. I just added `hash_filename_if_too_long` to avoid getting filenames longer than 255 characters when this happens.
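A sketch of the idea behind `hash_filename_if_too_long` (illustrative, not the exact implementation):
```python
import hashlib
import os

MAX_FILENAME_LENGTH = 255

def hash_filename_if_too_long(path: str) -> str:
    # Keep the directory, but replace an over-long lock filename
    # with a fixed-size hash so the OS limit is never exceeded.
    dirname, filename = os.path.split(path)
    if len(filename) > MAX_FILENAME_LENGTH:
        filename = hashlib.md5(filename.encode()).hexdigest() + ".lock"
    return os.path.join(dirname, filename)
```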
For context: we usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, the filelock filename could end up being very long. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2509/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2508/comments | https://api.github.com/repos/huggingface/datasets/issues/2508/events | https://github.com/huggingface/datasets/issues/2508 | 921,863,173 | MDU6SXNzdWU5MjE4NjMxNzM= | 2,508 | Load Image Classification Dataset from Local | {
"login": "Jacobsolawetz",
"id": 8428198,
"node_id": "MDQ6VXNlcjg0MjgxOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8428198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jacobsolawetz",
"html_url": "https://github.com/Jacobsolawetz",
"followers_url": "https://api.github.com/users/Jacobsolawetz/followers",
"following_url": "https://api.github.com/users/Jacobsolawetz/following{/other_user}",
"gists_url": "https://api.github.com/users/Jacobsolawetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jacobsolawetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jacobsolawetz/subscriptions",
"organizations_url": "https://api.github.com/users/Jacobsolawetz/orgs",
"repos_url": "https://api.github.com/users/Jacobsolawetz/repos",
"events_url": "https://api.github.com/users/Jacobsolawetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jacobsolawetz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Is this folder structure a standard, a bit like imagenet ?\r\nIn this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmy_custom_cifar = load_dataset(\"cifar_like\", data_dir=\"path/to/data/dir\")\r\n```\r\n\r\nLet me know what you think",
"Yep that would be sweet - closing for now as we found a workaround. ",
"@lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing.",
"Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/image-folder', data_files='PetImages/')\r\n```",
"Bumping this one following our recent discussion @mariosasko @nateraw :)"
] | 1,623,797,013,000 | 1,646,152,184,000 | 1,646,152,184,000 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10".
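For instance, something along these lines (the loader name here is hypothetical; the layout mirrors torchvision's `ImageFolder`, as discussed in the comments above):
```python
from datasets import load_dataset

# Expected layout (one sub-folder per class):
#   PetImages/
#       Cat/  0.jpg, 1.jpg, ...
#       Dog/  0.jpg, 1.jpg, ...
ds = load_dataset("imagefolder", data_dir="PetImages/")
```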
**Describe alternatives you've considered**
Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path)
Write custom data loader logic
**Additional context**
We're training ViT on a custom dataset
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2508/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2507/comments | https://api.github.com/repos/huggingface/datasets/issues/2507/events | https://github.com/huggingface/datasets/pull/2507 | 921,441,962 | MDExOlB1bGxSZXF1ZXN0NjcwNDQ0MDgz | 2,507 | Rearrange JSON field names to match passed features schema field names | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,766,202,000 | 1,623,840,469,000 | 1,623,840,469,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2507",
"html_url": "https://github.com/huggingface/datasets/pull/2507",
"diff_url": "https://github.com/huggingface/datasets/pull/2507.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2507.patch",
"merged_at": 1623840469000
} | This PR depends on PR #2453 (which must be merged first).
Close #2366. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2507/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2506/comments | https://api.github.com/repos/huggingface/datasets/issues/2506/events | https://github.com/huggingface/datasets/pull/2506 | 921,435,598 | MDExOlB1bGxSZXF1ZXN0NjcwNDM4NTgx | 2,506 | Add course banner | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,765,834,000 | 1,623,774,336,000 | 1,623,774,335,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2506",
"html_url": "https://github.com/huggingface/datasets/pull/2506",
"diff_url": "https://github.com/huggingface/datasets/pull/2506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2506.patch",
"merged_at": 1623774335000
} | This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not; I can move it just below the badges too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2506/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2506/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2505/comments | https://api.github.com/repos/huggingface/datasets/issues/2505/events | https://github.com/huggingface/datasets/pull/2505 | 921,234,797 | MDExOlB1bGxSZXF1ZXN0NjcwMjY2NjQy | 2,505 | Make numpy arrow extractor faster | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec",
"Can we convert this draft to PR @lhoestq ?",
"Ready for review ! cc @vblagoje",
"@lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. ",
"Thanks for investigating @vblagoje \r\n\r\n@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it"
] | 1,623,751,892,000 | 1,624,874,019,000 | 1,624,874,018,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2505",
"html_url": "https://github.com/huggingface/datasets/pull/2505",
"diff_url": "https://github.com/huggingface/datasets/pull/2505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2505.patch",
"merged_at": 1624874018000
} | I changed the NumpyArrowExtractor to call `to_numpy` directly and see if it can lead to speed-ups, as discussed in https://github.com/huggingface/datasets/issues/2498
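For intuition, a minimal sketch of the difference (an illustrative micro-benchmark, not the PR's code):
```python
import numpy as np
import pyarrow as pa

arr = pa.array(np.arange(5000, dtype=np.float32))

# Slow path: materialize Python objects, then rebuild an ndarray
slow = np.array(arr.to_pylist(), dtype=np.float32)

# Faster path: ask Arrow for a numpy array directly
# (zero_copy_only=False permits a copy when the layout requires one)
fast = arr.to_numpy(zero_copy_only=False)
assert np.array_equal(slow, fast)
```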
This could make the numpy/torch/tf/jax formatting faster. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2505/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2505/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2503/comments | https://api.github.com/repos/huggingface/datasets/issues/2503/events | https://github.com/huggingface/datasets/issues/2503 | 920,636,186 | MDU6SXNzdWU5MjA2MzYxODY= | 2,503 | SubjQA wrong boolean values in entries | {
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @arnaudstiegler, thanks for reporting. I'm investigating it.",
"@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.",
"I have:\r\n- opened an issue in their repo: https://github.com/megagonlabs/SubjQA/issues/3\r\n- written an email to all the paper authors",
"Please [see my response](https://github.com/megagonlabs/SubjQA/issues/3#issuecomment-905160010). There will be a fix in a couple of days."
] | 1,623,692,566,000 | 1,629,863,526,000 | null | NONE | null | null | null | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectivity level of the question (on a 1 to 5 scale, with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered subjective)
However, `is_ques_subjective` seems to have wrong values in the entire dataset.
For instance, in the example in the dataset card, we have:
- "question_subj_level": 2
- "is_ques_subjective": false
However, according to the description, the question should be subjective since the `question_subj_level` is below 4
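A quick sketch of how one could check this across a whole split (the `books` config is an arbitrary choice):
```python
from datasets import load_dataset

ds = load_dataset("subjqa", "books", split="train")
# Count entries where the boolean disagrees with the documented rule
# (question_subj_level below 4 should mean subjective).
mismatched = ds.filter(
    lambda x: (x["question_subj_level"] < 4) != x["is_ques_subjective"]
)
print(f"{len(mismatched)} of {len(ds)} entries are inconsistent")
```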
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2503/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2502/comments | https://api.github.com/repos/huggingface/datasets/issues/2502/events | https://github.com/huggingface/datasets/pull/2502 | 920,623,572 | MDExOlB1bGxSZXF1ZXN0NjY5NzQ1MDA5 | 2,502 | JAX integration | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,691,463,000 | 1,624,292,150,000 | 1,624,292,149,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2502",
"html_url": "https://github.com/huggingface/datasets/pull/2502",
"diff_url": "https://github.com/huggingface/datasets/pull/2502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2502.patch",
"merged_at": 1624292148000
} | Hi !
I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow).
It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects.
```python
from datasets import Dataset
d = Dataset.from_dict({"foo": [[0., 1., 2.]]})
d = d.with_format("jax")
d[0]
# {'foo': DeviceArray([0., 1., 2.], dtype=float32)}
```
A few details:
- The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default
- AFAIK it's not possible to do a full zero-copy conversion from arrow data to jax data. We are doing arrow -> numpy -> jax, but the numpy -> jax part unfortunately doesn't do zero copy (see [here](https://github.com/google/jax/issues/4486))
- the env var for disabling JAX is `USE_JAX`. However, I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO
I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset.
Since the `convert_to_python_objects` method became slow (it's the point where pytorch, tf, and now jax get imported), I fixed it by checking `sys.modules` to avoid unnecessary imports of pytorch, tf or jax.
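The `sys.modules` check is roughly this idea (a sketch; the actual helper in the codebase may differ):
```python
import sys

def _is_torch_imported() -> bool:
    # Only inspect torch objects if the user has already imported torch;
    # this avoids importing heavy frameworks just to check types.
    return "torch" in sys.modules
```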
Close #2495 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2502/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2501/comments | https://api.github.com/repos/huggingface/datasets/issues/2501/events | https://github.com/huggingface/datasets/pull/2501 | 920,579,634 | MDExOlB1bGxSZXF1ZXN0NjY5NzA3Nzc0 | 2,501 | Add Zenodo metadata file with license | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,688,092,000 | 1,623,689,382,000 | 1,623,689,382,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2501",
"html_url": "https://github.com/huggingface/datasets/pull/2501",
"diff_url": "https://github.com/huggingface/datasets/pull/2501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2501.patch",
"merged_at": 1623689382000
} | This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI record, setting it to `"Apache-2.0"` instead of the default `"other-open"`.
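For reference, the relevant part of a `.zenodo.json` file might look like this (a sketch based on Zenodo's metadata schema; field values are illustrative, not the actual file contents):
```json
{
  "license": "Apache-2.0",
  "title": "huggingface/datasets"
}
```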
Close #2472. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2501/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2500/comments | https://api.github.com/repos/huggingface/datasets/issues/2500/events | https://github.com/huggingface/datasets/pull/2500 | 920,471,411 | MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1 | 2,500 | Add load_dataset_builder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors ❤️ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... 😉 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. 🤗 ",
"@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.",
"Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.\r\n",
"Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets\r\n/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```",
"Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.",
"Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)\"\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at <this path> or <this url>\" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error."
] | 1,623,680,865,000 | 1,625,789,296,000 | 1,625,481,958,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2500",
"html_url": "https://github.com/huggingface/datasets/pull/2500",
"diff_url": "https://github.com/huggingface/datasets/pull/2500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2500.patch",
"merged_at": 1625481957000
} | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
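For example (a usage sketch):
```python
from datasets import load_dataset_builder

# Inspect a dataset's metadata without downloading the data itself.
builder = load_dataset_builder("squad")
print(builder.info.description)
print(builder.info.features)
```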
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2500/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2500/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2499/comments | https://api.github.com/repos/huggingface/datasets/issues/2499/events | https://github.com/huggingface/datasets/issues/2499 | 920,413,021 | MDU6SXNzdWU5MjA0MTMwMjE= | 2,499 | Python Programming Puzzles | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"👀 @TalSchuster",
"Thanks @VictorSanh!\r\nThere's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles"
] | 1,623,677,238,000 | 1,623,780,854,000 | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Python Programming Puzzles
- **Description:** A programming challenge called programming puzzles, intended as an objective and comprehensive evaluation of program synthesis
- **Paper:** https://arxiv.org/pdf/2106.05784.pdf
- **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md))
- **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs.
Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2499/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2499/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2498/comments | https://api.github.com/repos/huggingface/datasets/issues/2498/events | https://github.com/huggingface/datasets/issues/2498 | 920,411,285 | MDU6SXNzdWU5MjA0MTEyODU= | 2,498 | Improve torch formatting performance | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"That’s interesting thanks, let’s see what we can do. Can you detail your last sentence? I’m not sure I understand it well.",
"Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:\r\n\r\n```python\r\nimport pyarrow as pa # I used pyarrow 3.0.0\r\nimport numpy as np\r\n\r\nn, max_length = 1_000, 512\r\nlow, high, size = 0, 2 << 16, (n, max_length)\r\n\r\ntable = pa.Table.from_pydict({\r\n \"input_ids\": np.random.default_rng(42).integers(low=low, high=high, size=size).tolist()\r\n})\r\n\r\n\r\n%%timeit\r\n_ = table.to_pandas()[\"input_ids\"].to_numpy()\r\n# 1.44 ms ± 80.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_pandas().to_numpy()\r\n# 461 µs ± 14.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_numpy()\r\n# 317 µs ± 5.06 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n```\r\n\r\nCurrently the conversion from arrow to numpy is done in the NumpyArrowExtractor here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d6d0ede9486ffad7944642ca9a326e058b676788/src/datasets/formatting/formatting.py#L143-L166\r\n\r\nLet's update the NumpyArrowExtractor to call `to_numpy` directly and see how our github benchmarks evolve ?__",
"Sounds like a plan @lhoestq If you create a PR I'll pick it up and try it out right away! ",
"@lhoestq I can also prepare the PR, just lmk. ",
"I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?",
"I created https://github.com/huggingface/datasets/pull/2505 if you want to play with it @vblagoje ",
"> I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?\r\n\r\n@thomwolf starting from the top, each rectangle represents the cumulative amount of it takes to execute the method call. Therefore, format_batch in torch_formatter.py takes ~20 sec, and the largest portion of that call is taken by to_pandas call and the smaller portion (grey rectangle) by the other method invocation(s) in format_batch (series_to_numpy etc). \r\n\r\nFeatures of the dataset are BERT pre-training model input columns i.e:\r\n```\r\nf = Features({ \r\n \"input_ids\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"attention_mask\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"token_type_ids\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"labels\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"next_sentence_label\": Value(dtype=\"int8\")\r\n})\r\n```\r\n\r\nI'll work with @lhoestq till we get to the bottom of this one. \r\n ",
"@lhoestq the proposed branch is faster, but overall training speedup is a few percentage points. I couldn't figure out how to include the GitHub branch into setup.py, so I couldn't start NVidia optimized Docker-based pre-training run. But on bare metal, there is a slight improvement. I'll do some more performance traces. ",
"Hi @vblagoje, to install Datasets from @lhoestq PR reference #2505, you can use:\r\n```shell\r\npip install git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head#egg=datasets\r\n```",
"Hey @albertvillanova yes thank you, I am aware, I can easily pull it from a terminal command line but then I can't automate docker image builds as dependencies are picked up from setup.py and for some reason setup.py doesn't accept this string format.",
"@vblagoje in that case, you can add this to your `setup.py`:\r\n```python\r\n install_requires=[\r\n \"datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head\",\r\n```",
"@lhoestq @thomwolf @albertvillanova The new approach is definitely faster, dataloader now takes less than 3% cumulative time (pink rectangle two rectangles to the right of tensor.py backward invocation)\r\n\r\n![Screen Shot 2021-06-16 at 3 05 06 PM](https://user-images.githubusercontent.com/458335/122224432-19de4700-ce82-11eb-982f-d45d4bcc1e41.png)\r\n\r\nWhen we drill down into dataloader next invocation we get:\r\n\r\n![Screen Shot 2021-06-16 at 3 09 56 PM](https://user-images.githubusercontent.com/458335/122224976-a1c45100-ce82-11eb-8d40-59194740d616.png)\r\n\r\nAnd finally format_batch:\r\n\r\n![Screen Shot 2021-06-16 at 3 11 07 PM](https://user-images.githubusercontent.com/458335/122225132-cae4e180-ce82-11eb-8a16-967ab7c1c2aa.png)\r\n\r\n\r\nNot sure this could be further improved but this is definitely a decent step forward.\r\n\r\n",
"> ```python\r\n> datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head\r\n> ```\r\n\r\n@albertvillanova how would I replace datasets dependency in https://github.com/huggingface/transformers/blob/master/setup.py as the above approach is not working. ",
"@vblagoje I tested my proposed approach before posting it here and it worked for me. \r\n\r\nIs it not working in your case because of the SSH protocol? In that case you could try the same approach but using HTTPS:\r\n```\r\n\"datasets @ git+https://github.com/huggingface/datasets.git@refs/pull/2505/head\",\r\n``` ",
"Also note the blanks before and after the `@`.",
"@albertvillanova of course it works. Apologies. I needed to change datasets in all deps references , like [here](https://github.com/huggingface/transformers/blob/master/setup.py#L235) for example. "
] | 1,623,677,124,000 | 1,624,269,294,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve the read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using the HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use the HF trainer with the torch.distributed training approach on a single machine with 8 GPUs.
The current performance is about 30% slower than the NVidia-optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than the off-the-shelf HF ecosystem. Perhaps, in the future, with @stas00's work on deepspeed integration, it could even be exceeded.
**Describe the solution you'd like**
Using profiling tools, we've observed that approximately 25% of cumulative run time is spent on the data loader `next` call.
![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png)
As you can observe, most of the data loader `next` call is spent in the HF datasets `torch_formatter.py` `format_batch` call.
Digging a bit deeper into format_batch, we can see the following profiler data:
![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png)
Once again, a lot of time is spent in the pyarrow table conversion to pandas, which seems like an intermediary step. Offline, @lhoestq told me that this approach was, for some unknown reason, faster than direct conversion to numpy.
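A rough micro-benchmark of the two extraction paths might look like this (a sketch; sizes and column name are arbitrary):
```python
import timeit

import numpy as np
import pyarrow as pa

table = pa.Table.from_pydict(
    {"input_ids": np.random.default_rng(42).integers(0, 2 << 16, (1000, 512)).tolist()}
)

# Current path: Arrow -> pandas -> numpy
via_pandas = timeit.timeit(lambda: table["input_ids"].to_pandas().to_numpy(), number=100)
# Candidate path: Arrow -> numpy directly
direct = timeit.timeit(lambda: table["input_ids"].to_numpy(), number=100)
print(f"via pandas: {via_pandas:.3f}s, direct to_numpy: {direct:.3f}s")
```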
**Describe alternatives you've considered**
I am not familiar with pyarrow and have not yet considered the alternatives to the current approach.
Most of the online advice around data loader performance improvements revolves around increasing the number of workers and using pinned memory for copying tensors from the host device to GPUs, but we've already tried these avenues without much performance improvement. The Weights & Biases dashboard for the pre-training task reports CPU utilization of ~10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2498/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2497/comments | https://api.github.com/repos/huggingface/datasets/issues/2497/events | https://github.com/huggingface/datasets/pull/2497 | 920,250,382 | MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3 | 2,497 | Use default cast for sliced list arrays if pyarrow >= 4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?",
"@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 😉 "
] | 1,623,664,967,000 | 1,623,780,378,000 | 1,623,680,677,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2497",
"html_url": "https://github.com/huggingface/datasets/pull/2497",
"diff_url": "https://github.com/huggingface/datasets/pull/2497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2497.patch",
"merged_at": 1623680677000
} | Since pyarrow version 4, casting sliced list arrays is supported.
This PR uses the default pyarrow cast in Datasets for sliced list arrays if the pyarrow version is >= 4.
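A minimal illustration of what works with pyarrow >= 4 but raised an error before (a sketch):
```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1)  # non-zero offset
# With pyarrow >= 4 this cast succeeds; older versions raised a
# "Casting sliced lists (non-zero offset) not yet implemented" error.
print(sliced.cast(pa.list_(pa.int32())))
```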
In relation to PRs #2461 and #2490.
cc: @lhoestq, @abhi1thakur, @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2497/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2496/comments | https://api.github.com/repos/huggingface/datasets/issues/2496/events | https://github.com/huggingface/datasets/issues/2496 | 920,216,314 | MDU6SXNzdWU5MjAyMTYzMTQ= | 2,496 | Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,662,426,000 | 1,624,287,903,000 | 1,624,287,903,000 | MEMBER | null | null | null | `Dataset.map` uses the dataset fingerprint (a hash) for caching.
However, the fingerprint seems to change when someone moves the cache directory of the dataset.
This is because it uses the default fingerprint generation:
1. the dataset path is used to get the fingerprint
2. the modification time of the arrow file is also used to get the fingerprint
To fix that, we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
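A rough sketch of that idea (the function and argument names are made up for illustration):
```python
import hashlib
import os

def path_relative_fingerprint(cache_file: str, cache_dir: str) -> str:
    # Hash the cache path *relative* to the cache directory, so that
    # moving the cache directory doesn't change the fingerprint.
    relative_path = os.path.relpath(cache_file, cache_dir)
    return hashlib.md5(relative_path.encode("utf-8")).hexdigest()
```
| {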
"url": "https://api.github.com/repos/huggingface/datasets/issues/2496/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2496/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2495/comments | https://api.github.com/repos/huggingface/datasets/issues/2495/events | https://github.com/huggingface/datasets/issues/2495 | 920,170,030 | MDU6SXNzdWU5MjAxNzAwMzA= | 2,495 | JAX formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,659,527,000 | 1,624,292,149,000 | 1,624,292,149,000 | MEMBER | null | null | null | We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2495/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2494/comments | https://api.github.com/repos/huggingface/datasets/issues/2494/events | https://github.com/huggingface/datasets/issues/2494 | 920,149,183 | MDU6SXNzdWU5MjAxNDkxODM= | 2,494 | Improve docs on Enhancing performance | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [] | 1,623,658,308,000 | 1,623,658,308,000 | null | MEMBER | null | null | null | In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases:
- How to make datasets as fast as possible
- How to make datasets use the least RAM
- How to make datasets use the least hard drive memory
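For instance, the speed/RAM trade-off could be illustrated with something like (a sketch):
```python
from datasets import load_dataset

# Faster access at the cost of RAM: load the dataset into memory
# instead of memory-mapping it from disk.
ds = load_dataset("squad", split="train", keep_in_memory=True)
```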
cc: @thomwolf
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2494/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2493/comments | https://api.github.com/repos/huggingface/datasets/issues/2493/events | https://github.com/huggingface/datasets/pull/2493 | 919,833,281 | MDExOlB1bGxSZXF1ZXN0NjY5MDc4OTcw | 2,493 | add tensorflow-macos support | {
"login": "slayerjain",
"id": 12831254,
"node_id": "MDQ6VXNlcjEyODMxMjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12831254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slayerjain",
"html_url": "https://github.com/slayerjain",
"followers_url": "https://api.github.com/users/slayerjain/followers",
"following_url": "https://api.github.com/users/slayerjain/following{/other_user}",
"gists_url": "https://api.github.com/users/slayerjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slayerjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slayerjain/subscriptions",
"organizations_url": "https://api.github.com/users/slayerjain/orgs",
"repos_url": "https://api.github.com/users/slayerjain/repos",
"events_url": "https://api.github.com/users/slayerjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/slayerjain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova done!"
] | 1,623,601,208,000 | 1,623,747,186,000 | 1,623,747,186,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2493",
"html_url": "https://github.com/huggingface/datasets/pull/2493",
"diff_url": "https://github.com/huggingface/datasets/pull/2493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2493.patch",
"merged_at": 1623747186000
} | ref - https://github.com/huggingface/datasets/issues/2068 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2492/comments | https://api.github.com/repos/huggingface/datasets/issues/2492/events | https://github.com/huggingface/datasets/pull/2492 | 919,718,102 | MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4 | 2,492 | Eduge | {
"login": "enod",
"id": 6023883,
"node_id": "MDQ6VXNlcjYwMjM4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enod",
"html_url": "https://github.com/enod",
"followers_url": "https://api.github.com/users/enod/followers",
"following_url": "https://api.github.com/users/enod/following{/other_user}",
"gists_url": "https://api.github.com/users/enod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enod/subscriptions",
"organizations_url": "https://api.github.com/users/enod/orgs",
"repos_url": "https://api.github.com/users/enod/repos",
"events_url": "https://api.github.com/users/enod/events{/privacy}",
"received_events_url": "https://api.github.com/users/enod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,561,059,000 | 1,624,355,344,000 | 1,623,840,106,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2492",
"html_url": "https://github.com/huggingface/datasets/pull/2492",
"diff_url": "https://github.com/huggingface/datasets/pull/2492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2492.patch",
"merged_at": 1623840106000
} | Hi, awesome folks behind Hugging Face!
Here is my PR for the text classification dataset in Mongolian.
Please do let me know if there is anything to clarify.
Thanks & Regards,
Enod | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2492/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2492/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2491/comments | https://api.github.com/repos/huggingface/datasets/issues/2491/events | https://github.com/huggingface/datasets/pull/2491 | 919,714,506 | MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw | 2,491 | add eduge classification dataset | {
"login": "enod",
"id": 6023883,
"node_id": "MDQ6VXNlcjYwMjM4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enod",
"html_url": "https://github.com/enod",
"followers_url": "https://api.github.com/users/enod/followers",
"following_url": "https://api.github.com/users/enod/following{/other_user}",
"gists_url": "https://api.github.com/users/enod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enod/subscriptions",
"organizations_url": "https://api.github.com/users/enod/orgs",
"repos_url": "https://api.github.com/users/enod/repos",
"events_url": "https://api.github.com/users/enod/events{/privacy}",
"received_events_url": "https://api.github.com/users/enod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this PR as I'll submit a new one - bug free"
] | 1,623,559,021,000 | 1,623,560,808,000 | 1,623,560,798,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2491",
"html_url": "https://github.com/huggingface/datasets/pull/2491",
"diff_url": "https://github.com/huggingface/datasets/pull/2491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2491.patch",
"merged_at": null
} |  | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2491/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2490/comments | https://api.github.com/repos/huggingface/datasets/issues/2490/events | https://github.com/huggingface/datasets/pull/2490 | 919,571,385 | MDExOlB1bGxSZXF1ZXN0NjY4ODc4NDA3 | 2,490 | Allow latest pyarrow version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"i need some help with this"
] | 1,623,507,454,000 | 1,625,590,492,000 | 1,623,657,203,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2490",
"html_url": "https://github.com/huggingface/datasets/pull/2490",
"diff_url": "https://github.com/huggingface/datasets/pull/2490.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2490.patch",
"merged_at": 1623657203000
} | Allow the latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0.
Close #2489. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2490/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2490/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2489/comments | https://api.github.com/repos/huggingface/datasets/issues/2489/events | https://github.com/huggingface/datasets/issues/2489 | 919,569,749 | MDU6SXNzdWU5MTk1Njk3NDk= | 2,489 | Allow latest pyarrow version once segfault bug is fixed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,506,992,000 | 1,623,657,203,000 | 1,623,657,203,000 | MEMBER | null | null | null | As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568):
- it was fixed on 3 May 2021
- version 4.0.1 was released on 19 May 2021 with the bug fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2489/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2488/comments | https://api.github.com/repos/huggingface/datasets/issues/2488/events | https://github.com/huggingface/datasets/pull/2488 | 919,500,756 | MDExOlB1bGxSZXF1ZXN0NjY4ODIwNDA1 | 2,488 | Set configurable downloaded datasets path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,488,943,000 | 1,623,662,007,000 | 1,623,659,347,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2488",
"html_url": "https://github.com/huggingface/datasets/pull/2488",
"diff_url": "https://github.com/huggingface/datasets/pull/2488.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2488.patch",
"merged_at": 1623659347000
} | Part of #2480. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2488/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2487/comments | https://api.github.com/repos/huggingface/datasets/issues/2487/events | https://github.com/huggingface/datasets/pull/2487 | 919,452,407 | MDExOlB1bGxSZXF1ZXN0NjY4Nzc5Mjk0 | 2,487 | Set configurable extracted datasets path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"Let me push a small fix... 😉 ",
"Thanks !"
] | 1,623,476,849,000 | 1,623,663,017,000 | 1,623,661,376,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2487",
"html_url": "https://github.com/huggingface/datasets/pull/2487",
"diff_url": "https://github.com/huggingface/datasets/pull/2487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2487.patch",
"merged_at": 1623661376000
} | Part of #2480. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2487/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2486/comments | https://api.github.com/repos/huggingface/datasets/issues/2486/events | https://github.com/huggingface/datasets/pull/2486 | 919,174,898 | MDExOlB1bGxSZXF1ZXN0NjY4NTI2Njg3 | 2,486 | Add Rico Dataset | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Thanks for adding this dataset :)\r\n\r\nRegarding your questions:\r\n1. We can have them as different configurations of the `rico` dataset\r\n2. Yes please use the path to the image and not open the image directly, so that we can let users open the images one at a time during training if they want to for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.\r\n3. Feel free to keep the hierarchies as strings if they don't follow a fixed format\r\n4. You can just return the path\r\n\r\n"
] | 1,623,442,661,000 | 1,631,176,166,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2486",
"html_url": "https://github.com/huggingface/datasets/pull/2486",
"diff_url": "https://github.com/huggingface/datasets/pull/2486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2486.patch",
"merged_at": null
} | Hi there!
I'm wanting to add the Rico datasets for software engineering type data to y'all's awesome library. However, as I have started coding, I've run into a few hiccups, so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.
1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset?
You can see the datasets available for Rico here: http://interactionmining.org/rico
2) As of right now, I have a semi-working version of the first dataset, which has pairs of screenshots and hierarchies from Android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them, so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset, since I run out of memory very fast. Is there a way to have the `datasets` lib not put everything into memory while it is processing the dataset?
2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image?
3) The hierarchies are JSON objects and, looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string; is this okay, or should I be doing it differently?
4) One of the Rico datasets is a bunch of animations (GIFs); is there a `datasets` feature that I can put this type of data into, or should I just return the path as a string?
I appreciate any and all help I can get for this PR; I think the Rico datasets will be an awesome addition to the library :nerd_face: !
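To make the discussion concrete, here is a minimal, hypothetical sketch of a loader along the lines suggested in the maintainer's reply above (store paths instead of decoded images, keep hierarchies as raw JSON strings). The URL, file layout, and column names are assumptions for illustration, not the actual Rico structure:
```python
import os

import datasets

_DATA_URL = "https://example.com/rico_screenshots.tar.gz"  # placeholder URL


class Rico(datasets.GeneratorBasedBuilder):
    """Sketch only: the screenshots + hierarchies configuration."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    # Keep the path, not the decoded pixels, so nothing big
                    # is loaded into memory until the user opens the file.
                    "screenshot_path": datasets.Value("string"),
                    # The hierarchies have no fixed schema, so store the raw
                    # JSON as a string.
                    "hierarchy": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_DATA_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir}
            )
        ]

    def _generate_examples(self, data_dir):
        # Yield one example per screenshot/hierarchy pair, lazily, so the
        # 66,000 large images never sit in memory at once.
        for idx, fname in enumerate(sorted(os.listdir(data_dir))):
            if not fname.endswith(".jpg"):
                continue
            json_path = os.path.join(data_dir, fname.replace(".jpg", ".json"))
            with open(json_path, encoding="utf-8") as f:
                hierarchy = f.read()
            yield idx, {
                "screenshot_path": os.path.join(data_dir, fname),
                "hierarchy": hierarchy,
            }
``` | {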
"url": "https://api.github.com/repos/huggingface/datasets/issues/2486/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2486/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2485/comments | https://api.github.com/repos/huggingface/datasets/issues/2485/events | https://github.com/huggingface/datasets/issues/2485 | 919,099,218 | MDU6SXNzdWU5MTkwOTkyMTg= | 2,485 | Implement layered building | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,623,437,665,000 | 1,623,437,665,000 | null | MEMBER | null | null | null | As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190):
> My suggestion for this would be to have this enabled by default.
>
> Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:
>
> 1. uncompress a handful of files via a generator enough to generate one arrow file
> 2. process arrow file 1
> 3. delete all the files that went in and aren't needed anymore.
>
> rinse and repeat.
>
> 1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project
> 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing
> 3. It would already include deleting temp files this issue is talking about
>
> I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.
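A hypothetical sketch of that loop, purely to illustrate the three steps; the gzip inputs, the batch size, and the single `text` column are assumptions for illustration, not part of the proposal:
```python
import gzip
import os
import shutil

import pyarrow as pa


def layered_build(compressed_files, out_dir, files_per_shard=2):
    """Uncompress a handful of files, write one Arrow shard, delete the
    extracted inputs, then rinse and repeat."""
    os.makedirs(out_dir, exist_ok=True)
    batch, shard_id = [], 0
    for path in compressed_files:  # assumed to be ``*.gz`` files
        extracted = path[: -len(".gz")]
        with gzip.open(path, "rb") as src, open(extracted, "wb") as dst:
            shutil.copyfileobj(src, dst)  # step 1: uncompress
        batch.append(extracted)
        if len(batch) == files_per_shard:
            shard_id = _flush(batch, out_dir, shard_id)  # steps 2 and 3
            batch = []
    if batch:
        _flush(batch, out_dir, shard_id)


def _flush(batch, out_dir, shard_id):
    texts = []
    for path in batch:
        with open(path, encoding="utf-8") as f:
            texts.extend(line.rstrip("\n") for line in f)
        os.remove(path)  # step 3: files that went in aren't needed anymore
    table = pa.table({"text": texts})
    with pa.OSFile(os.path.join(out_dir, f"shard-{shard_id}.arrow"), "wb") as sink:
        with pa.ipc.new_file(sink, table.schema) as writer:
            writer.write_table(table)
    return shard_id + 1
``` | {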
"url": "https://api.github.com/repos/huggingface/datasets/issues/2485/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2485/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2484/comments | https://api.github.com/repos/huggingface/datasets/issues/2484/events | https://github.com/huggingface/datasets/issues/2484 | 919,092,635 | MDU6SXNzdWU5MTkwOTI2MzU= | 2,484 | Implement loading a dataset builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign"
] | 1,623,437,242,000 | 1,625,481,957,000 | 1,625,481,957,000 | MEMBER | null | null | null | As discussed with @stas00 and @lhoestq, this would allow things like:
```python
from datasets import load_dataset_builder
dataset_name = "openwebtext"
builder = load_dataset_builder(dataset_name)
print(builder.cache_dir)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2484/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2484/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2483/comments | https://api.github.com/repos/huggingface/datasets/issues/2483/events | https://github.com/huggingface/datasets/pull/2483 | 918,871,712 | MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1 | 2,483 | Use gc.collect only when needed to avoid slow downs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I still think that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 onwards that does not properly release the iterable.",
"FR"
] | 1,623,424,170,000 | 1,624,044,306,000 | 1,623,425,496,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2483",
"html_url": "https://github.com/huggingface/datasets/pull/2483",
"diff_url": "https://github.com/huggingface/datasets/pull/2483.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2483.patch",
"merged_at": 1623425495000
} | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on Windows (see https://github.com/huggingface/datasets/pull/2482).
However, calling gc.collect too often causes significant slowdowns (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset.
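For illustration only (this is not the actual patch), the placement idea amounts to something like the sketch below, where `map` merely stands in for the real post-processing step:
```python
import gc


def post_process(dataset):
    # Stand-in for the real post-processing on a ``datasets.Dataset``.
    processed = dataset.map(lambda example: example)
    # On Windows, pyarrow can keep memory-mapped Arrow files open until the
    # objects holding them are collected; collecting once here releases the
    # handles without paying the gc cost on every other code path.
    gc.collect()
    return processed
``` | {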
"url": "https://api.github.com/repos/huggingface/datasets/issues/2483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2483/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2482/comments | https://api.github.com/repos/huggingface/datasets/issues/2482/events | https://github.com/huggingface/datasets/pull/2482 | 918,846,027 | MDExOlB1bGxSZXF1ZXN0NjY4MjMyMzI5 | 2,482 | Allow to use tqdm>=4.50.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,422,961,000 | 1,623,424,311,000 | 1,623,424,310,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2482",
"html_url": "https://github.com/huggingface/datasets/pull/2482",
"diff_url": "https://github.com/huggingface/datasets/pull/2482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2482.patch",
"merged_at": 1623424310000
} | We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)).
They were due to open arrow files not being properly closed by pyarrow.
Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6, gc.collect is called each time we no longer need an arrow file, to make sure that the files are closed.
close https://github.com/huggingface/datasets/issues/2471
cc @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2482/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2482/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2481/comments | https://api.github.com/repos/huggingface/datasets/issues/2481/events | https://github.com/huggingface/datasets/issues/2481 | 918,680,168 | MDU6SXNzdWU5MTg2ODAxNjg= | 2,481 | Delete extracted files to save disk space | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"id": 6836458,
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"title": "1.10",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 29,
"state": "closed",
"created_at": 1623178113000,
"updated_at": 1626881809000,
"due_on": 1628146800000,
"closed_at": 1626881809000
} | [
"My suggestion for this would be to have this enabled by default.\r\n\r\nPlus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n\r\n1. uncompress a handful of files via a generator enough to generate one arrow file\r\n2. process arrow file 1\r\n3. delete all the files that went in and aren't needed anymore.\r\n\r\nrinse and repeat.\r\n\r\n1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project\r\n2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing\r\n3. It would already include deleting temp files this issue is talking about\r\n\r\nI wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders."
] | 1,623,414,112,000 | 1,626,685,698,000 | 1,626,685,698,000 | MEMBER | null | null | null | As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save the typical user a great amount of disk space. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2481/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2481/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2480/comments | https://api.github.com/repos/huggingface/datasets/issues/2480/events | https://github.com/huggingface/datasets/issues/2480 | 918,678,578 | MDU6SXNzdWU5MTg2Nzg1Nzg= | 2,480 | Set download/extracted paths configurable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"For example to be able to send uncompressed and temp build files to another volume/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. on JZ there is a special partition for fast data, but it's also volatile, so only temp files should go there.\r\n\r\nThink of it as `TMPDIR` so we need the equivalent for `datasets`."
] | 1,623,414,024,000 | 1,623,767,029,000 | null | MEMBER | null | null | null | As discussed with @stas00 and @lhoestq, making these paths configurable may allow overcoming disk space limitations on different partitions/drives. See the usage sketch after the TODO list below.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path?
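A minimal usage sketch of what is already possible today: `HF_DATASETS_CACHE` is the documented umbrella variable, while the exact names of the download/extract-specific overrides are the ones introduced in #2487 and #2488; the path below is just an example:
```python
import os

# Set the variable *before* importing the library so the override is picked
# up when the config module is loaded.
os.environ["HF_DATASETS_CACHE"] = "/mnt/big_volume/hf_datasets"

from datasets import config  # noqa: E402

print(config.HF_DATASETS_CACHE)  # the cache now points at the big volume
``` | {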
"url": "https://api.github.com/repos/huggingface/datasets/issues/2480/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2480/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2479/comments | https://api.github.com/repos/huggingface/datasets/issues/2479/events | https://github.com/huggingface/datasets/pull/2479 | 918,672,431 | MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4 | 2,479 | ❌ load_datasets ❌ | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,413,676,000 | 1,623,422,785,000 | 1,623,422,785,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2479",
"html_url": "https://github.com/huggingface/datasets/pull/2479",
"diff_url": "https://github.com/huggingface/datasets/pull/2479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2479.patch",
"merged_at": 1623422784000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2479/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2478/comments | https://api.github.com/repos/huggingface/datasets/issues/2478/events | https://github.com/huggingface/datasets/issues/2478 | 918,507,510 | MDU6SXNzdWU5MTg1MDc1MTA= | 2,478 | Create release script | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,404,282,000 | 1,623,404,282,000 | null | MEMBER | null | null | null | Create a script so that releases can be done automatically (as done in `transformers`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2478/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2477/comments | https://api.github.com/repos/huggingface/datasets/issues/2477/events | https://github.com/huggingface/datasets/pull/2477 | 918,334,431 | MDExOlB1bGxSZXF1ZXN0NjY3NzczMTY0 | 2,477 | Fix docs custom stable version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"I see that @lhoestq overlooked this PR with his commit 07e2b05. 😢 \r\n\r\nI'm adding a script so that this issue does not happen again.\r\n",
"For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release.",
"I think we just need to clarify the release process in setup.py instead of adding a script that does the replacement",
"@lhoestq I really think we should implement a script that performs the release (instead of doing it manually as it is done now), as it is already the case in `transformers`. I will do it in a next PR.\r\n\r\nFor the moment, this PR includes one of the steps of the release script."
] | 1,623,396,363,000 | 1,623,662,060,000 | 1,623,658,818,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2477",
"html_url": "https://github.com/huggingface/datasets/pull/2477",
"diff_url": "https://github.com/huggingface/datasets/pull/2477.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2477.patch",
"merged_at": 1623658818000
} | Currently, the docs' default version is 1.5.0. This PR fixes that and sets the latest version instead. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2477/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2476/comments | https://api.github.com/repos/huggingface/datasets/issues/2476/events | https://github.com/huggingface/datasets/pull/2476 | 917,686,662 | MDExOlB1bGxSZXF1ZXN0NjY3MTg3OTk1 | 2,476 | Add TimeDial | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nI've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!\r\n\r\n~PS. I don't know why it's not triggering the build~"
] | 1,623,349,987,000 | 1,627,649,874,000 | 1,627,649,874,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2476",
"html_url": "https://github.com/huggingface/datasets/pull/2476",
"diff_url": "https://github.com/huggingface/datasets/pull/2476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2476.patch",
"merged_at": 1627649874000
} | Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2476/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2476/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2475/comments | https://api.github.com/repos/huggingface/datasets/issues/2475/events | https://github.com/huggingface/datasets/issues/2475 | 917,650,882 | MDU6SXNzdWU5MTc2NTA4ODI= | 2,475 | Issue in timit_asr database | {
"login": "hrahamim",
"id": 85702107,
"node_id": "MDQ6VXNlcjg1NzAyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/85702107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hrahamim",
"html_url": "https://github.com/hrahamim",
"followers_url": "https://api.github.com/users/hrahamim/followers",
"following_url": "https://api.github.com/users/hrahamim/following{/other_user}",
"gists_url": "https://api.github.com/users/hrahamim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hrahamim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrahamim/subscriptions",
"organizations_url": "https://api.github.com/users/hrahamim/orgs",
"repos_url": "https://api.github.com/users/hrahamim/repos",
"events_url": "https://api.github.com/users/hrahamim/events{/privacy}",
"received_events_url": "https://api.github.com/users/hrahamim/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This bug was fixed in #1995. Upgrading datasets to version 1.6 fixes the issue!",
"Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks "
] | 1,623,348,329,000 | 1,623,572,030,000 | 1,623,571,993,000 | NONE | null | null | null | ## Describe the bug
I am trying to load the timit_asr dataset, however only the first record is shown (duplicated over all the rows).
I am using the following line of code:
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
The above code results in the same sentence duplicated ten times.
It also happens when I use the dataset viewer on Streamlit.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
data = dataset.to_pandas()
```
## Expected results
A table with ten different rows (one per selected example).
## Actual results
The same sentence is returned for all ten rows.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1 (the issue also occurs in the latest version)
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 1.15.3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2475/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2474/comments | https://api.github.com/repos/huggingface/datasets/issues/2474/events | https://github.com/huggingface/datasets/issues/2474 | 917,622,055 | MDU6SXNzdWU5MTc2MjIwNTU= | 2,474 | cache_dir parameter for load_from_disk ? | {
"login": "TaskManager91",
"id": 7063207,
"node_id": "MDQ6VXNlcjcwNjMyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TaskManager91",
"html_url": "https://github.com/TaskManager91",
"followers_url": "https://api.github.com/users/TaskManager91/followers",
"following_url": "https://api.github.com/users/TaskManager91/following{/other_user}",
"gists_url": "https://api.github.com/users/TaskManager91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TaskManager91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TaskManager91/subscriptions",
"organizations_url": "https://api.github.com/users/TaskManager91/orgs",
"repos_url": "https://api.github.com/users/TaskManager91/repos",
"events_url": "https://api.github.com/users/TaskManager91/events{/privacy}",
"received_events_url": "https://api.github.com/users/TaskManager91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! `load_from_disk` doesn't move the data. If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.\r\n\r\nHowever note than writing data to your google drive actually fills the VM's disk (see https://github.com/huggingface/datasets/issues/643)\r\n\r\nGiven that, I don't think that changing the cache directory changes anything.\r\n\r\nLet me know what you think",
"Thanks for your answer! I am a little surprised since I just want to read the dataset.\r\n\r\nAfter debugging a bit, I noticed that the VM’s disk fills up when the tables (generator) are converted to a list:\r\n\r\nhttps://github.com/huggingface/datasets/blob/5ba149773d23369617563d752aca922081277ec2/src/datasets/table.py#L850\r\n\r\nIf I try to iterate through the table’s generator e.g.: \r\n\r\n`length = sum(1 for x in tables)`\r\n\r\nthe VM’s disk fills up as well.\r\n\r\nI’m running out of Ideas 😄 ",
"Indeed reading the data shouldn't increase the VM's disk. Not sure what google colab does under the hood for that to happen",
"Apparently, Colab uses a local cache of the data files read/written from Google Drive. See:\r\n- https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457\r\n- https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540\r\n- https://github.com/googlecolab/colabtools/issues/2147#issuecomment-885052636"
] | 1,623,346,776,000 | 1,645,023,301,000 | 1,645,023,300,000 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset with the _load_from_disk_ function, the data gets cached to the VM's disk:
```python
from datasets import load_from_disk

myPreprocessedData = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData")
```
I know that caching on Google Drive could slow down training, but at least it would run.
**Describe the solution you'd like**
Add a `cache_dir` parameter to the `load_from_disk` function.
**Describe alternatives you've considered**
It looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. Is there perhaps a template here that uses the load_from_disk function?
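A minimal sketch of the requested usage, assuming the proposed parameter existed (note: `cache_dir` below is exactly the hypothetical argument this issue asks for — it is not part of `load_from_disk` today, and the cache path is illustrative):
```python
from datasets import load_from_disk

# Proposed usage sketch: keep cache files (e.g. results of map()) on the
# mounted drive instead of the Colab VM disk. `cache_dir` is the parameter
# requested in this issue, not an existing argument.
myPreprocessedData = load_from_disk(
    "/content/gdrive/MyDrive/ASR_data/myPreprocessedData",
    cache_dir="/content/gdrive/MyDrive/hf_cache",
)
```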
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2474/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2473/comments | https://api.github.com/repos/huggingface/datasets/issues/2473/events | https://github.com/huggingface/datasets/pull/2473 | 917,538,629 | MDExOlB1bGxSZXF1ZXN0NjY3MDU5MjI5 | 2,473 | Add Disfl-QA | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.",
"I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\r\n"
] | 1,623,341,880,000 | 1,627,559,779,000 | 1,627,559,778,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2473",
"html_url": "https://github.com/huggingface/datasets/pull/2473",
"diff_url": "https://github.com/huggingface/datasets/pull/2473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2473.patch",
"merged_at": 1627559778000
} | Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2473/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2472/comments | https://api.github.com/repos/huggingface/datasets/issues/2472/events | https://github.com/huggingface/datasets/issues/2472 | 917,463,821 | MDU6SXNzdWU5MTc0NjM4MjE= | 2,472 | Fix automatic generation of Zenodo DOI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"I have received a reply from Zenodo support:\r\n> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you.",
"Other repo maintainers had the same problem with Zenodo. \r\n\r\nThere is an open issue on their GitHub repo: zenodo/zenodo#2181",
"I have received the following request from Zenodo support:\r\n> Could you send us the link to the repository as well as the release tag?\r\n\r\nMy reply:\r\n> Sure, here it is:\r\n> - Link to the repository: https://github.com/huggingface/datasets\r\n> - Link to the repository at the release tag: https://github.com/huggingface/datasets/releases/tag/1.8.0\r\n> - Release tag: 1.8.0",
"Zenodo issue has been fixed. The 1.8.0 release DOI can be found here: https://zenodo.org/record/4946100#.YMd6vKj7RPY"
] | 1,623,338,146,000 | 1,623,689,382,000 | 1,623,689,382,000 | MEMBER | null | null | null | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2472/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2471/comments | https://api.github.com/repos/huggingface/datasets/issues/2471/events | https://github.com/huggingface/datasets/issues/2471 | 917,067,165 | MDU6SXNzdWU5MTcwNjcxNjU= | 2,471 | Fix PermissionError on Windows when using tqdm >=4.50.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,313,909,000 | 1,623,424,310,000 | 1,623,424,310,000 | MEMBER | null | null | null | See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111
```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2471/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2470/comments | https://api.github.com/repos/huggingface/datasets/issues/2470/events | https://github.com/huggingface/datasets/issues/2470 | 916,724,260 | MDU6SXNzdWU5MTY3MjQyNjA= | 2,470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | {
"login": "mbforbes",
"id": 1170062,
"node_id": "MDQ6VXNlcjExNzAwNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbforbes",
"html_url": "https://github.com/mbforbes",
"followers_url": "https://api.github.com/users/mbforbes/followers",
"following_url": "https://api.github.com/users/mbforbes/following{/other_user}",
"gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions",
"organizations_url": "https://api.github.com/users/mbforbes/orgs",
"repos_url": "https://api.github.com/users/mbforbes/repos",
"events_url": "https://api.github.com/users/mbforbes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbforbes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?",
"Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.",
"Could you trying reinstalling pyarrow with pip ?\r\nI'm not sure why it would check in your multicurtural-sc directory for source files.",
"Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:\r\n\r\n```bash\r\n$ pip install --upgrade --force-reinstall pyarrow\r\nCollecting pyarrow\r\n Downloading pyarrow-4.0.1-cp39-cp39-manylinux2014_x86_64.whl (21.9 MB)\r\n |████████████████████████████████| 21.9 MB 23.8 MB/s\r\nCollecting numpy>=1.16.6\r\n Using cached numpy-1.20.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.4 MB)\r\nInstalling collected packages: numpy, pyarrow\r\n Attempting uninstall: numpy\r\n Found existing installation: numpy 1.20.3\r\n Uninstalling numpy-1.20.3:\r\n Successfully uninstalled numpy-1.20.3\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 3.0.0\r\n Uninstalling pyarrow-3.0.0:\r\n Successfully uninstalled pyarrow-3.0.0\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndatasets 1.8.0 requires pyarrow<4.0.0,>=1.0.0, but you have pyarrow 4.0.1 which is incompatible.\r\nSuccessfully installed numpy-1.20.3 pyarrow-4.0.1\r\n```\r\n\r\nTrying it, the same issue:\r\n\r\n![image](https://user-images.githubusercontent.com/1170062/121730226-3f470b80-caa4-11eb-85a5-684c44c816da.png)\r\n\r\nI tried installing `\"pyarrow<4.0.0\"`, which gave me 3.0.0. Running, still, same issue.\r\n\r\nI agree it's weird that pyarrow is checking the source code directory for its files. (There is no `pyarrow/` directory there.) To me, that makes it seem like an issue with how pyarrow is called.\r\n\r\nOut of curiosity, I tried running this with fewer workers to see when the error arises:\r\n\r\n- 1: ✅\r\n- 2: ✅\r\n- 4: ✅\r\n- 8: ✅\r\n- 10: ✅\r\n- 11: ❌ 🤔\r\n- 12: ❌\r\n- 16: ❌\r\n- 32: ❌\r\n\r\nchecking my datasets:\r\n\r\n```python\r\n>>> datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 389290\r\n })\r\n validation.sc: Dataset({\r\n features: ['text'],\r\n num_rows: 10 # 🤔\r\n })\r\n validation.wvs: Dataset({\r\n features: ['text'],\r\n num_rows: 93928\r\n })\r\n})\r\n```\r\n\r\nNew hypothesis: crash if `num_proc` > length of a dataset? 😅\r\n\r\nIf so, this might be totally my fault, as the caller. Could be a docs fix, or maybe this library could do a check to limit `num_proc` for this case?",
"Good catch ! Not sure why it could raise such a weird issue from pyarrow though\r\nWe should definitely reduce num_proc to the length of the dataset if needed and log a warning.",
"This has been fixed in #2566, thanks @connor-mccarthy !\r\nWe'll make a new release soon that includes the fix ;)"
] | 1,623,278,422,000 | 1,625,132,094,000 | 1,625,130,673,000 | NONE | null | null | null | ## Describe the bug
Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 worked before, but now it seems either inconsistent or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated; I'm happy to provide more info if it would help us diagnose.
## Steps to reproduce the bug
```python
# this function will be applied with map()
def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        padding=PaddingStrategy.DO_NOT_PAD,
        truncation=True,
    )

# data_files is a Dict[str, str] mapping name -> path
datasets = load_dataset("text", data_files={...})

# this is where the error happens if num_proc = 16,
# but is fine if num_proc = 1
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=num_workers,
)
```
## Expected results
The `map()` function succeeds with `num_proc` > 1.
## Actual results
![image](https://user-images.githubusercontent.com/1170062/121404271-a6cc5200-c910-11eb-8e27-5c893bd04042.png)
![image](https://user-images.githubusercontent.com/1170062/121404362-be0b3f80-c910-11eb-9117-658943029aef.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, but I think N/A for this issue
- Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
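Based on the discussion above — the crash appears when `num_proc` exceeds the number of rows in the smallest split — a minimal workaround sketch, reusing `datasets` and `tokenize_function` from the reproduction code:
```python
# Workaround sketch (assumption from the discussion: the crash happens when
# num_proc is larger than a split). Clamp num_proc to the smallest split size
# so that no worker receives an empty shard.
num_workers = min(16, min(len(split) for split in datasets.values()))

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=max(1, num_workers),
)
```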
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2470/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2469/comments | https://api.github.com/repos/huggingface/datasets/issues/2469/events | https://github.com/huggingface/datasets/pull/2469 | 916,440,418 | MDExOlB1bGxSZXF1ZXN0NjY2MTA1OTk1 | 2,469 | Bump tqdm version | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows 😞 \r\n\r\nit's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets`",
"Closing since this is now fixed in #2482 "
] | 1,623,259,480,000 | 1,623,423,822,000 | 1,623,423,816,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2469",
"html_url": "https://github.com/huggingface/datasets/pull/2469",
"diff_url": "https://github.com/huggingface/datasets/pull/2469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2469.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2469/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2468/comments | https://api.github.com/repos/huggingface/datasets/issues/2468/events | https://github.com/huggingface/datasets/pull/2468 | 916,427,320 | MDExOlB1bGxSZXF1ZXN0NjY2MDk0ODI5 | 2,468 | Implement ClassLabel encoding in JSON loader | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"No, nevermind @lhoestq. Thanks to you for your reviews!"
] | 1,623,258,534,000 | 1,624,894,794,000 | 1,624,892,735,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2468",
"html_url": "https://github.com/huggingface/datasets/pull/2468",
"diff_url": "https://github.com/huggingface/datasets/pull/2468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2468.patch",
"merged_at": 1624892734000
} | Close #2365. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2468/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2466/comments | https://api.github.com/repos/huggingface/datasets/issues/2466/events | https://github.com/huggingface/datasets/pull/2466 | 915,914,098 | MDExOlB1bGxSZXF1ZXN0NjY1NjY1MjQy | 2,466 | change udpos features structure | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Let's add the tags in another PR. Thanks again !",
"Close #2061 , close #2444."
] | 1,623,225,811,000 | 1,624,017,309,000 | 1,623,840,097,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2466",
"html_url": "https://github.com/huggingface/datasets/pull/2466",
"diff_url": "https://github.com/huggingface/datasets/pull/2466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2466.patch",
"merged_at": 1623840097000
} | The structure is changed so that each example is a sentence.
The change is done for the following issues:
#2061
#2444
Close #2061, close #2444. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2466/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2465/comments | https://api.github.com/repos/huggingface/datasets/issues/2465/events | https://github.com/huggingface/datasets/pull/2465 | 915,525,071 | MDExOlB1bGxSZXF1ZXN0NjY1MzMxMDMz | 2,465 | adding masahaner dataset | {
"login": "dadelani",
"id": 23586676,
"node_id": "MDQ6VXNlcjIzNTg2Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dadelani",
"html_url": "https://github.com/dadelani",
"followers_url": "https://api.github.com/users/dadelani/followers",
"following_url": "https://api.github.com/users/dadelani/following{/other_user}",
"gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dadelani/subscriptions",
"organizations_url": "https://api.github.com/users/dadelani/orgs",
"repos_url": "https://api.github.com/users/dadelani/repos",
"events_url": "https://api.github.com/users/dadelani/events{/privacy}",
"received_events_url": "https://api.github.com/users/dadelani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for the review. ",
"Thanks a lot for the corrections and comments. \r\n\r\nI have resolved point 2. The make style still throws some errors, please see below\r\n\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics\r\n/bin/sh: 1: black: not found\r\nMakefile:13: recipe for target 'style' failed\r\nmake: *** [style] Error 127\r\n\r\nCan you help to resolve this?",
"Thank you very much @lhoestq for the help. "
] | 1,623,187,225,000 | 1,623,682,745,000 | 1,623,682,745,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2465",
"html_url": "https://github.com/huggingface/datasets/pull/2465",
"diff_url": "https://github.com/huggingface/datasets/pull/2465.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2465.patch",
"merged_at": 1623682745000
} | Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq, can you please review? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2465/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2465/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2464/comments | https://api.github.com/repos/huggingface/datasets/issues/2464/events | https://github.com/huggingface/datasets/pull/2464 | 915,485,601 | MDExOlB1bGxSZXF1ZXN0NjY1Mjk1MDE5 | 2,464 | fix: adjusting indexing for the labels. | {
"login": "drugilsberg",
"id": 5406908,
"node_id": "MDQ6VXNlcjU0MDY5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5406908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drugilsberg",
"html_url": "https://github.com/drugilsberg",
"followers_url": "https://api.github.com/users/drugilsberg/followers",
"following_url": "https://api.github.com/users/drugilsberg/following{/other_user}",
"gists_url": "https://api.github.com/users/drugilsberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drugilsberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drugilsberg/subscriptions",
"organizations_url": "https://api.github.com/users/drugilsberg/orgs",
"repos_url": "https://api.github.com/users/drugilsberg/repos",
"events_url": "https://api.github.com/users/drugilsberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/drugilsberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Good catch ! Thanks for fixing it\r\n\r\nMy pleasure🙏"
] | 1,623,185,245,000 | 1,623,233,746,000 | 1,623,229,828,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2464",
"html_url": "https://github.com/huggingface/datasets/pull/2464",
"diff_url": "https://github.com/huggingface/datasets/pull/2464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2464.patch",
"merged_at": 1623229828000
} | The label indices were mismatched with respect to the actual ones used in the dataset. Specifically, `0` is used for `SUPPORTS` and `1` is used for `REFUTES`.
After this change, the `README.md` now reflects the content of `dataset_infos.json`.
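For quick reference, the corrected mapping stated above, written out in code (the constant name is purely illustrative):
```python
# Label mapping as described in this PR; the constant name is illustrative.
FEVER_LABELS = {0: "SUPPORTS", 1: "REFUTES"}
```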
Signed-off-by: Matteo Manica <drugilsberg@gmail.com> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2464/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2463/comments | https://api.github.com/repos/huggingface/datasets/issues/2463/events | https://github.com/huggingface/datasets/pull/2463 | 915,454,788 | MDExOlB1bGxSZXF1ZXN0NjY1MjY3NTA2 | 2,463 | Fix proto_qa download link | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,183,796,000 | 1,623,329,396,000 | 1,623,313,870,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2463",
"html_url": "https://github.com/huggingface/datasets/pull/2463",
"diff_url": "https://github.com/huggingface/datasets/pull/2463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2463.patch",
"merged_at": 1623313869000
} | Fixes #2459
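Background sketch of the pinning strategy (the repository path and hash below are placeholders, not the PR's actual values — see the sentence after this block for the real approach):
```python
# Illustrative only: pinning a raw GitHub URL to a specific commit hash so the
# download keeps working even if files move on the default branch.
_COMMIT = "0123456789abcdef0123456789abcdef01234567"  # placeholder hash
_URL = f"https://raw.githubusercontent.com/some-org/proto-qa-data/{_COMMIT}/train/train.jsonl"
```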
Instead of updating the path, this PR pins the download URL to a fixed commit hash, as suggested by @lhoestq. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2463/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2462/comments | https://api.github.com/repos/huggingface/datasets/issues/2462/events | https://github.com/huggingface/datasets/issues/2462 | 915,384,613 | MDU6SXNzdWU5MTUzODQ2MTM= | 2,462 | Merge DatasetDict and Dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [] | 1,623,180,124,000 | 1,630,560,812,000 | null | MEMBER | null | null | null | As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve the UX with respect to the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users.
- A user expects a "Dataset" (whether it contains multiple splits or a single one), and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
Here is a proposal for discussion and refinement (and potential abandonment if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers, you access the examples of one split after the other (in a deterministic order)
- when you index with a string/split name, you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict
The end goal would be to merge both the Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both.
There are a few things that we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature (a minimal sketch of one possible disambiguation appears near the end of this issue)
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them like train_test_split that is currently only available for Dataset can be tweaked to work for a split dataset
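To make point 1 concrete, a minimal, hypothetical sketch of the indexing semantics (names like `MergedDataset` and `_splits` are illustrative only and not part of the `datasets` API; split lookup takes precedence, assuming split/column name collisions are disallowed):
```python
class MergedDataset:
    def __init__(self, splits):
        # splits: dict mapping split name -> list of example dicts
        self._splits = splits

    def __getitem__(self, key):
        if isinstance(key, str):
            if key in self._splits:  # split access, e.g. ds["train"]
                return self._splits[key]
            # column access, e.g. ds["input_ids"], concatenated across splits
            return [ex[key] for split in self._splits.values() for ex in split]
        # integer access walks the splits concatenated in a deterministic order
        for split in self._splits.values():
            if key < len(split):
                return split[key]
            key -= len(split)
        raise IndexError(key)


ds = MergedDataset({"train": [{"x": 1}, {"x": 2}], "test": [{"x": 3}]})
assert ds[2] == {"x": 3}                     # integer indexing crosses splits
assert ds["x"] == [1, 2, 3]                  # column access
assert ds["train"] == [{"x": 1}, {"x": 2}]   # split access
```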
cc: @thomwolf @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2462/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2462/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2461/comments | https://api.github.com/repos/huggingface/datasets/issues/2461/events | https://github.com/huggingface/datasets/pull/2461 | 915,286,150 | MDExOlB1bGxSZXF1ZXN0NjY1MTE3MTY4 | 2,461 | Support sliced list arrays in cast | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,173,927,000 | 1,623,174,984,000 | 1,623,174,983,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2461",
"html_url": "https://github.com/huggingface/datasets/pull/2461",
"diff_url": "https://github.com/huggingface/datasets/pull/2461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2461.patch",
"merged_at": 1623174983000
} | There is this issue in pyarrow:
```python
import pyarrow as pa
arr = pa.array([[i * 10] for i in range(4)])
arr.cast(pa.list_(pa.int32())) # works
arr = arr.slice(1)
arr.cast(pa.list_(pa.int32())) # fails
# ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented")
```
However in `Dataset.cast` we slice tables to cast their types (it's memory intensive), so we have the same issue.
Because of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough to not be sliced).
In this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting.
I used the `pyarrow.compute.subtract` function to update the offsets of the ListArray.
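For illustration, a hedged sketch of the idea (not the PR's actual code, which manipulates the offsets buffer with `pyarrow.compute.subtract`): rebuild the sliced `ListArray` as an equivalent array with zero offset, after which the cast succeeds.
```python
import pyarrow as pa

arr = pa.array([[i * 10] for i in range(4)]).slice(1)

# Rebuild an equivalent ListArray with zero offset: recompute zero-based
# offsets from the list lengths and pair them with the flattened values.
offsets = [0]
for sublist in arr.to_pylist():
    offsets.append(offsets[-1] + len(sublist))
rebuilt = pa.ListArray.from_arrays(pa.array(offsets, pa.int32()), arr.flatten())

rebuilt.cast(pa.list_(pa.int32()))  # works: the rebuilt array is not sliced
```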
cc @abhi1thakur @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2461/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2461/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2460/comments | https://api.github.com/repos/huggingface/datasets/issues/2460/events | https://github.com/huggingface/datasets/pull/2460 | 915,268,536 | MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4 | 2,460 | Revert default in-memory for small datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/4",
"html_url": "https://github.com/huggingface/datasets/milestone/4",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels",
"id": 6680642,
"node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==",
"number": 4,
"title": "1.8",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 2,
"state": "closed",
"created_at": 1618937356000,
"updated_at": 1623178297000,
"due_on": 1623135600000,
"closed_at": 1623178264000
} | [
"Thank you for this welcome change guys!"
] | 1,623,172,463,000 | 1,623,175,454,000 | 1,623,174,943,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2460",
"html_url": "https://github.com/huggingface/datasets/pull/2460",
"diff_url": "https://github.com/huggingface/datasets/pull/2460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2460.patch",
"merged_at": 1623174943000
} | Close #2458 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2460/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2460/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2459/comments | https://api.github.com/repos/huggingface/datasets/issues/2459/events | https://github.com/huggingface/datasets/issues/2459 | 915,222,015 | MDU6SXNzdWU5MTUyMjIwMTU= | 2,459 | `Proto_qa` hosting seems to be broken | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"@VictorSanh , I think @mariosasko is already working on it. "
] | 1,623,168,992,000 | 1,623,313,869,000 | 1,623,313,869,000 | MEMBER | null | null | null | ## Describe the bug
The hosting (on GitHub) of the `proto_qa` dataset seems broken. I haven't investigated further yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
    train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
    num_proc=download_config.num_proc,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
    return function(data_struct)
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
    use_auth_token=download_config.use_auth_token,
  File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2459/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2458/comments | https://api.github.com/repos/huggingface/datasets/issues/2458/events | https://github.com/huggingface/datasets/issues/2458 | 915,199,693 | MDU6SXNzdWU5MTUxOTk2OTM= | 2,458 | Revert default in-memory for small datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/4",
"html_url": "https://github.com/huggingface/datasets/milestone/4",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels",
"id": 6680642,
"node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==",
"number": 4,
"title": "1.8",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 2,
"state": "closed",
"created_at": 1618937356000,
"updated_at": 1623178297000,
"due_on": 1623135600000,
"closed_at": 1623178264000
} | [
"cc: @krandiash (pinged in reverted PR)."
] | 1,623,167,501,000 | 1,623,178,631,000 | 1,623,174,943,000 | MEMBER | null | null | null | Users are reporting issues and confusion about setting default in-memory to True for small datasets.
We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, interactive/exploratory analysis,...), where default in-memory can explicitly be enabled (see the sketch below), and no caching will be done
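A minimal sketch of what the explicit opt-in looks like (the dataset name here is just an example):
```python
from datasets import load_dataset

# Opt in explicitly instead of relying on a size-based default:
# the split is loaded into memory rather than memory-mapped from disk
ds = load_dataset("imdb", split="train", keep_in_memory=True)
```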
After discussing with @lhoestq we have agreed to:
- revert this feature (implemented in #2182)
- explain in the docs how to optimize speed/performance by setting default in-memory
cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2458/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2458/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2457/comments | https://api.github.com/repos/huggingface/datasets/issues/2457/events | https://github.com/huggingface/datasets/pull/2457 | 915,079,441 | MDExOlB1bGxSZXF1ZXN0NjY0OTQwMzQ0 | 2,457 | Add align_labels_with_mapping function | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq i think this is ready for another review 🙂 ",
"@lhoestq thanks for the feedback - it's now integrated :) \r\n\r\ni also added a comment about sorting the input label IDs",
"Created the PR here: https://github.com/huggingface/datasets/pull/2510",
"> Thanks ! Looks all good now :)\r\n> \r\n> We will also need to have the `DatasetDict.align_labels_with_mapping` method. Let me quickly add it\r\n\r\nthanks a lot! i always forget about `DatasetDict` - will be happy when it's just one \"dataset\" object :)",
"So, there seems to be a problem with the function align_labels_with_mapping for models like this: https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli]. At least with this model, but perhaps also with others, the model.config.label2id values are of type str not int, which crashes said function. After manually converting the model.config.label2id values to int, the script runs smoothly.\r\n\r\n"
] | 1,623,160,440,000 | 1,641,977,861,000 | 1,623,923,812,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2457",
"html_url": "https://github.com/huggingface/datasets/pull/2457",
"diff_url": "https://github.com/huggingface/datasets/pull/2457.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2457.patch",
"merged_at": 1623923812000
} | This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on, say, MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=model.config.label2id, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
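One caveat reported in the comments above: some model configs on the Hub store the `label2id` values as strings rather than ints, which crashes the helper. A defensive cast (an illustrative workaround, not part of this PR) sidesteps it:
```python
# coerce possibly-string ids (e.g. "2") to ints before aligning
label2id = {label: int(idx) for label, idx in model.config.label2id.items()}
mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=label2id, label_column="label")
```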
cc @thomwolf @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2457/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2457/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2456/comments | https://api.github.com/repos/huggingface/datasets/issues/2456/events | https://github.com/huggingface/datasets/pull/2456 | 914,709,293 | MDExOlB1bGxSZXF1ZXN0NjY0NjAwOTk1 | 2,456 | Fix cross-reference typos in documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,145,514,000 | 1,623,174,097,000 | 1,623,174,096,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2456",
"html_url": "https://github.com/huggingface/datasets/pull/2456",
"diff_url": "https://github.com/huggingface/datasets/pull/2456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2456.patch",
"merged_at": 1623174096000
} | Fix some minor typos in docs that avoid the creation of cross-reference links. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2456/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2455/comments | https://api.github.com/repos/huggingface/datasets/issues/2455/events | https://github.com/huggingface/datasets/pull/2455 | 914,177,468 | MDExOlB1bGxSZXF1ZXN0NjY0MTEzNjg2 | 2,455 | Update version in xor_tydi_qa.py | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for updating the version\r\n\r\n> Should I revert to the old dummy/1.0.0 or delete it and keep only dummy/1.1.0?\r\n\r\nFeel free to delete the old dummy data files\r\n"
] | 1,623,119,025,000 | 1,623,684,925,000 | 1,623,684,925,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2455",
"html_url": "https://github.com/huggingface/datasets/pull/2455",
"diff_url": "https://github.com/huggingface/datasets/pull/2455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2455.patch",
"merged_at": 1623684925000
} | Fix #2449
@lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2455/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2454/comments | https://api.github.com/repos/huggingface/datasets/issues/2454/events | https://github.com/huggingface/datasets/pull/2454 | 913,883,631 | MDExOlB1bGxSZXF1ZXN0NjYzODUyODU1 | 2,454 | Rename config and environment variable for in memory max size | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for the rename, @albertvillanova!"
] | 1,623,093,668,000 | 1,623,098,626,000 | 1,623,098,626,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2454",
"html_url": "https://github.com/huggingface/datasets/pull/2454",
"diff_url": "https://github.com/huggingface/datasets/pull/2454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2454.patch",
"merged_at": 1623098626000
} | As discussed in #2409, both config and environment variable have been renamed.
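For reference, a quick sanity check from Python — assuming the post-rename identifiers, which should be verified against `datasets.config`:
```python
import os

# assumed renamed environment variable; must be set before importing datasets
os.environ["HF_DATASETS_IN_MEMORY_MAX_SIZE"] = "0"  # 0 is assumed to disable the size-based default

import datasets

print(datasets.config.IN_MEMORY_MAX_SIZE)  # assumed renamed config attribute
```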
cc: @stas00, huggingface/transformers#12056 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2454/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2453/comments | https://api.github.com/repos/huggingface/datasets/issues/2453/events | https://github.com/huggingface/datasets/pull/2453 | 913,729,258 | MDExOlB1bGxSZXF1ZXN0NjYzNzE3NTk2 | 2,453 | Keep original features order | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. It was failing for many datasets like squad",
"and obviously it broke everything",
"Feel free to revert my commit. I can investigate this in the coming days",
"@lhoestq I do not understand when you say:\r\n> It was failing for many datasets like squad\r\n\r\nAll the tests were green after my last commit.",
"> All the tests were green after my last commit.\r\n\r\nYes but loading the actual squad dataset was failing :/\r\n"
] | 1,623,083,198,000 | 1,623,780,336,000 | 1,623,771,828,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2453",
"html_url": "https://github.com/huggingface/datasets/pull/2453",
"diff_url": "https://github.com/huggingface/datasets/pull/2453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2453.patch",
"merged_at": 1623771828000
} | When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
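A minimal repro sketch of the expected behavior (the file name and contents are made up):
```python
import json

from datasets import Features, Value, load_dataset

# columns deliberately written in non-alphabetical order
with open("data.json", "w") as f:
    f.write(json.dumps({"b": 1, "a": "x"}) + "\n")

features = Features({"b": Value("int64"), "a": Value("string")})
with_features = load_dataset("json", data_files="data.json", features=features, split="train")
without_features = load_dataset("json", data_files="data.json", split="train")
assert with_features.column_names == without_features.column_names  # same order either way
```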
I found this issue while working on #2366. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2453/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2453/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2452/comments | https://api.github.com/repos/huggingface/datasets/issues/2452/events | https://github.com/huggingface/datasets/issues/2452 | 913,603,877 | MDU6SXNzdWU5MTM2MDM4Nzc= | 2,452 | MRPC test set differences between torch and tensorflow datasets | {
"login": "FredericOdermatt",
"id": 50372080,
"node_id": "MDQ6VXNlcjUwMzcyMDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/50372080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FredericOdermatt",
"html_url": "https://github.com/FredericOdermatt",
"followers_url": "https://api.github.com/users/FredericOdermatt/followers",
"following_url": "https://api.github.com/users/FredericOdermatt/following{/other_user}",
"gists_url": "https://api.github.com/users/FredericOdermatt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FredericOdermatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredericOdermatt/subscriptions",
"organizations_url": "https://api.github.com/users/FredericOdermatt/orgs",
"repos_url": "https://api.github.com/users/FredericOdermatt/repos",
"events_url": "https://api.github.com/users/FredericOdermatt/events{/privacy}",
"received_events_url": "https://api.github.com/users/FredericOdermatt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue there."
] | 1,623,075,626,000 | 1,623,076,472,000 | 1,623,076,472,000 | NONE | null | null | null | ## Describe the bug
When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets.
## Steps to reproduce the bug
Minimal working code
```python
from datasets import load_dataset
import tensorflow as tf
import tensorflow_datasets
# torch
dataset = load_dataset("glue", "mrpc")
# tf
data = tensorflow_datasets.load('glue/{}'.format('mrpc'))
data = list(data['test'].as_numpy_iterator())
for i in range(40,50):
    tf_sentence1 = data[i]['sentence1'].decode("utf-8")
    tf_sentence2 = data[i]['sentence2'].decode("utf-8")
    tf_label = data[i]['label']
    index = data[i]['idx']
    print('Index {}'.format(index))
    torch_sentence1 = dataset['test']['sentence1'][index]
    torch_sentence2 = dataset['test']['sentence2'][index]
    torch_label = dataset['test']['label'][index]
    print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label))
    print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label))
```
Sample output
```
Index 954
Tensorflow:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label -1
Torch:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label 1
Index 711
Tensorflow:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label -1
Torch:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label 0
```
## Expected results
I would expect the datasets to be independent of whether I am working with torch or tensorflow.
## Actual results
Test set labels are provided by `datasets.load_dataset()` for MRPC. However, MRPC is the only task where the test set labels are not -1.
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2452/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2451/comments | https://api.github.com/repos/huggingface/datasets/issues/2451/events | https://github.com/huggingface/datasets/pull/2451 | 913,263,340 | MDExOlB1bGxSZXF1ZXN0NjYzMzIwNDY1 | 2,451 | Mention that there are no answers in adversarial_qa test set | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,053,637,000 | 1,623,054,854,000 | 1,623,054,853,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2451",
"html_url": "https://github.com/huggingface/datasets/pull/2451",
"diff_url": "https://github.com/huggingface/datasets/pull/2451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2451.patch",
"merged_at": 1623054853000
} | As mention in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2451/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2450/comments | https://api.github.com/repos/huggingface/datasets/issues/2450/events | https://github.com/huggingface/datasets/issues/2450 | 912,890,291 | MDU6SXNzdWU5MTI4OTAyOTE= | 2,450 | BLUE file not found | {
"login": "mirfan899",
"id": 3822565,
"node_id": "MDQ6VXNlcjM4MjI1NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3822565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mirfan899",
"html_url": "https://github.com/mirfan899",
"followers_url": "https://api.github.com/users/mirfan899/followers",
"following_url": "https://api.github.com/users/mirfan899/following{/other_user}",
"gists_url": "https://api.github.com/users/mirfan899/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mirfan899/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mirfan899/subscriptions",
"organizations_url": "https://api.github.com/users/mirfan899/orgs",
"repos_url": "https://api.github.com/users/mirfan899/repos",
"events_url": "https://api.github.com/users/mirfan899/events{/privacy}",
"received_events_url": "https://api.github.com/users/mirfan899/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```",
"Ah, my mistake. Thanks for correcting"
] | 1,622,998,914,000 | 1,623,062,775,000 | 1,623,062,775,000 | NONE | null | null | null | Hi, I'm having the following issue when I try to load the `blue` metric.
```python
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
    use_auth_token=download_config.use_auth_token,
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
    use_auth_token=download_config.use_auth_token,
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric
    dataset=False,
  File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module
    combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py.
The file is also not present on the master branch on github.
Here is dataset installed version info
```shell
pip freeze | grep datasets
datasets==1.7.0
```
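As pointed out in the comments above, the metric name is actually `bleu`; listing the available metrics is a quick way to check (a sketch):
```python
from datasets import list_metrics, load_metric

print([name for name in list_metrics() if "bleu" in name])  # e.g. ['bleu', 'sacrebleu']
metric = load_metric("bleu")
```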
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2450/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2449/comments | https://api.github.com/repos/huggingface/datasets/issues/2449/events | https://github.com/huggingface/datasets/pull/2449 | 912,751,752 | MDExOlB1bGxSZXF1ZXN0NjYyODg1ODUz | 2,449 | Update `xor_tydi_qa` url to v1.1 | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just noticed while \r\n```load_dataset('local_path/datastes/xor_tydi_qa')``` works,\r\n```load_dataset('xor_tydi_qa')``` \r\noutputs an error: \r\n`\r\nFileNotFoundError: Couldn't find file at https://nlp.cs.washington.edu/xorqa/XORQA_site/data/xor_dev_retrieve_eng_span.jsonl\r\n`\r\n(the old url)\r\n\r\nI tired clearing the cache `.cache/huggingface/modules` and `.cache/huggingface/datasets`, didn't work.\r\n\r\nAnyone know how to fix this? Thanks.",
"It seems like the error is not on your end. By default, the lib tries to download the version of the dataset script that matches the version of the lib, and that version of the script is, in your case, broken because the old URL no longer works. Once this PR gets merged, you can wait for the new release or set `script_version` to `\"master\"` in `load_dataset` to get the fixed version of the script.",
"@mariosasko Thanks! It works now.\r\n\r\nPasting the docstring here for reference.\r\n```\r\n script_version (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load:\r\n\r\n - For canonical datasets in the `huggingface/datasets` library like \"squad\", the default version of the module is the local version fo the lib.\r\n You can specify a different version from your local version of the lib (e.g. \"master\" or \"1.2.0\") but it might cause compatibility issues.\r\n - For community provided datasets like \"lhoestq/squad\" that have their own git repository on the Datasets Hub, the default version \"main\" corresponds to the \"main\" branch.\r\n You can specify a different version that the default \"main\" by using a commit sha or a git tag of the dataset repository.\r\n```\r\nBranch name didn't work, but commit sha works.",
"Regarding the issue you mentioned about the `--ignore_verifications` flag, I think we should actually change the current behavior of the `--save_infos` flag to make it ignore the verifications as well, so that you don't need to specific `--ignore_verifications` in this case.",
"@lhoestq I realized I forgot to change this:\r\n\r\nhttps://github.com/huggingface/datasets/blob/fdbf5a97d3393f4a91e4cddcabe364029508f7ce/datasets/xor_tydi_qa/xor_tydi_qa.py#L72-L73\r\n\r\nWhat should I do?",
"Oh indeed. Please open a PR to change this. This should be 1.1.0"
] | 1,622,972,698,000 | 1,623,078,981,000 | 1,623,054,664,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2449",
"html_url": "https://github.com/huggingface/datasets/pull/2449",
"diff_url": "https://github.com/huggingface/datasets/pull/2449.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2449.patch",
"merged_at": 1623054663000
} | The dataset is updated and the old url no longer works. So I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.
https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2449/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2448/comments | https://api.github.com/repos/huggingface/datasets/issues/2448/events | https://github.com/huggingface/datasets/pull/2448 | 912,360,109 | MDExOlB1bGxSZXF1ZXN0NjYyNTI2NjA3 | 2,448 | Fix flores download link | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,914,224,000 | 1,623,182,578,000 | 1,623,053,905,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2448",
"html_url": "https://github.com/huggingface/datasets/pull/2448",
"diff_url": "https://github.com/huggingface/datasets/pull/2448.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2448.patch",
"merged_at": 1623053905000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2448/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2447/comments | https://api.github.com/repos/huggingface/datasets/issues/2447/events | https://github.com/huggingface/datasets/issues/2447 | 912,299,527 | MDU6SXNzdWU5MTIyOTk1Mjc= | 2,447 | dataset adversarial_qa has no answers in the "test" set | {
"login": "bjascob",
"id": 22728060,
"node_id": "MDQ6VXNlcjIyNzI4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/22728060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjascob",
"html_url": "https://github.com/bjascob",
"followers_url": "https://api.github.com/users/bjascob/followers",
"following_url": "https://api.github.com/users/bjascob/following{/other_user}",
"gists_url": "https://api.github.com/users/bjascob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjascob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjascob/subscriptions",
"organizations_url": "https://api.github.com/users/bjascob/orgs",
"repos_url": "https://api.github.com/users/bjascob/repos",
"events_url": "https://api.github.com/users/bjascob/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjascob/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.\r\nIn any case we should mention this in the dataset card of this dataset.",
"Makes sense, but not intuitive for someone searching through the datasets. Thanks for adding the note to clarify."
] | 1,622,905,058,000 | 1,623,064,387,000 | 1,623,064,387,000 | NONE | null | null | null | ## Describe the bug
When loading the adversarial_qa dataset, the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['test']
print('Loaded {:,} examples'.format(len(examples)))
has_answers = 0
for e in examples:
    if e['answers']['text']:
        has_answers += 1
print('{:,} have answers'.format(has_answers))
>>> Loaded 3,000 examples
>>> 0 have answers
examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['validation']
<...code above...>
>>> Loaded 3,000 examples
>>> 3,000 have answers
```
## Expected results
If 'test' is a valid dataset, it should have answers. Also note that all of the 'train' and 'validation' sets have answers; there are no "no answer" questions in this set (not sure if this is correct or not).
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyArrow version: 1.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2447/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2446/comments | https://api.github.com/repos/huggingface/datasets/issues/2446/events | https://github.com/huggingface/datasets/issues/2446 | 911,635,399 | MDU6SXNzdWU5MTE2MzUzOTk= | 2,446 | `yelp_polarity` is broken | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"```\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 233, in <module>\r\n configs = get_confs(option)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 604, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 588, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 148, in get_confs\r\n builder_cls = nlp.load.import_main_class(module_path[0], dataset=True)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py\", line 85, in import_main_class\r\n module = importlib.import_module(module_path)\r\nFile \"/usr/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\nFile \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\nFile \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\nFile \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\nFile \"/home/sasha/.cache/huggingface/modules/datasets_modules/datasets/yelp_polarity/a770787b2526bdcbfc29ac2d9beb8e820fbc15a03afd3ebc4fb9d8529de57544/yelp_polarity.py\", line 36, in <module>\r\n from datasets.tasks import TextClassification\r\n```",
"Solved by updating the `nlpviewer`"
] | 1,622,821,469,000 | 1,622,833,007,000 | 1,622,833,007,000 | MEMBER | null | null | null | ![image](https://user-images.githubusercontent.com/22514219/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2446/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2445/comments | https://api.github.com/repos/huggingface/datasets/issues/2445/events | https://github.com/huggingface/datasets/pull/2445 | 911,577,578 | MDExOlB1bGxSZXF1ZXN0NjYxODMzMTky | 2,445 | Fix broken URLs for bn_hate_speech and covid_tweets_japanese | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! To fix the CI you just have to rename the dummy data file in the dummy_data.zip files",
"thanks for the tip with the dummy data - all fixed now!"
] | 1,622,818,415,000 | 1,622,828,386,000 | 1,622,828,385,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2445",
"html_url": "https://github.com/huggingface/datasets/pull/2445",
"diff_url": "https://github.com/huggingface/datasets/pull/2445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2445.patch",
"merged_at": 1622828385000
} | Closes #2388 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2445/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2444/comments | https://api.github.com/repos/huggingface/datasets/issues/2444/events | https://github.com/huggingface/datasets/issues/2444 | 911,297,139 | MDU6SXNzdWU5MTEyOTcxMzk= | 2,444 | Sentence Boundaries missing in Dataset: xtreme / udpos | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nThis is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it.",
"Closed by #2466."
] | 1,622,797,826,000 | 1,624,017,223,000 | 1,624,017,223,000 | CONTRIBUTOR | null | null | null | I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldependencies.org/format.html#sentence-boundaries-and-comments).
But the sentence boundaries do not seem to be well represented by the huggingface datasets features. I found out that multiple sentences are concatenated together as a 1D array, without any delimiter.
PAN-X, which is another token classification subset from xtreme, does represent the sentence boundary using a 2D array.
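The difference also shows up when loading both configs directly; a minimal sketch, assuming the config names used in the viewer below:
```python
from datasets import load_dataset

# both configs come from the same `xtreme` loading script
udpos = load_dataset("xtreme", "udpos.English", split="train")
panx = load_dataset("xtreme", "PAN-X.en", split="train")

# udpos flattens the tokens of many sentences into one long sequence,
# while PAN-X keeps one sentence per example
print(udpos.features)
print(panx.features)
```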
You may compare PAN-X.en and udpos.English in the explorer:
https://huggingface.co/datasets/viewer/?dataset=xtreme | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2444/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2443/comments | https://api.github.com/repos/huggingface/datasets/issues/2443/events | https://github.com/huggingface/datasets/issues/2443 | 909,983,574 | MDU6SXNzdWU5MDk5ODM1NzQ= | 2,443 | Some tests hang on Windows | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.\r\nAlso if we could have an error instead of an infinite loop I'm sure windows users will appreciate that",
"Unfortunately, I know this problem very well... 😅 \r\n\r\nI remember having proposed to throw an error instead of hanging in an infinite loop #2220: 60c7d1b6b71469599a27147a08100f594e7a3f84, 8c8ab60018b00463edf1eca500e434ff061546fc \r\nbut @lhoestq told me:\r\n> Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps://github.com/benediktschmitt/py-filelock\r\n> \r\n> Unless we have proper tests for this, I wouldn't recommend to change it\r\n\r\nI opened an Issue requesting a warning/error at startup for that case: #2224",
"@albertvillanova Thanks for additional info on this issue.\r\n\r\nYes, I think the best option is to throw an error instead of suppressing it in a loop. I've considered 2 more options, but I don't really like them:\r\n1. create a temporary file with a filename longer than 255 characters on import; if this fails, long paths are not enabled and raise a warning. I'm not sure about this approach because I don't like the idea of creating a temporary file on import for this purpose.\r\n2. check if long paths are enabled with [this code](https://stackoverflow.com/a/46546731/14095927). As mentioned in the comment, this code relies on an undocumented function and Win10-specific."
] | 1,622,680,050,000 | 1,624,870,059,000 | 1,624,870,059,000 | CONTRIBUTOR | null | null | null | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO throwing an error is too harsh, but maybe we can emit a warning in the top-level `__init__.py` on startup if long paths are not enabled.
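A minimal sketch of what such a startup warning could look like, using the registry-based check as one possible approach (the exact wording, placement, and choice of check are assumptions):
```python
import sys
import warnings


def warn_if_long_paths_disabled():
    # Only relevant on Windows, where the default MAX_PATH is 260 characters
    if not sys.platform.startswith("win"):
        return
    try:
        import winreg

        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Control\FileSystem",
        )
        enabled, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
        if not enabled:
            warnings.warn(
                "Windows long paths are disabled; cache paths longer than 260 "
                "characters may make file locks hang. Consider enabling them."
            )
    except OSError:
        # Registry key unavailable: stay silent rather than guess
        pass
```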
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2443/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2442/comments | https://api.github.com/repos/huggingface/datasets/issues/2442/events | https://github.com/huggingface/datasets/pull/2442 | 909,677,029 | MDExOlB1bGxSZXF1ZXN0NjYwMjE1ODY1 | 2,442 | add english language tags for ~100 datasets | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags"
] | 1,622,651,096,000 | 1,622,800,300,000 | 1,622,800,299,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2442",
"html_url": "https://github.com/huggingface/datasets/pull/2442",
"diff_url": "https://github.com/huggingface/datasets/pull/2442.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2442.patch",
"merged_at": 1622800299000
} | As discussed on Slack, I have manually checked that ~100 datasets have at least one subset in English. This information was missing, so I am adding it to the READMEs.
Note that I didn't check all the subsets, so it's possible that some of the datasets also have subsets in languages other than English... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2442/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2441/comments | https://api.github.com/repos/huggingface/datasets/issues/2441/events | https://github.com/huggingface/datasets/issues/2441 | 908,554,713 | MDU6SXNzdWU5MDg1NTQ3MTM= | 2,441 | DuplicatedKeysError on personal dataset | {
"login": "lucaguarro",
"id": 22605313,
"node_id": "MDQ6VXNlcjIyNjA1MzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22605313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaguarro",
"html_url": "https://github.com/lucaguarro",
"followers_url": "https://api.github.com/users/lucaguarro/followers",
"following_url": "https://api.github.com/users/lucaguarro/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaguarro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaguarro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaguarro/subscriptions",
"organizations_url": "https://api.github.com/users/lucaguarro/orgs",
"repos_url": "https://api.github.com/users/lucaguarro/repos",
"events_url": "https://api.github.com/users/lucaguarro/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaguarro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example if you use a counter to define the key of each example, make sure that your counter is not reset to 0 in during examples generation (between two open files for examples).\r\n\r\nLet me know if you have other questions :)",
"Yup, I indeed was generating duplicate keys. Fixed it and now it's working."
] | 1,622,570,381,000 | 1,622,850,603,000 | 1,622,850,603,000 | NONE | null | null | null | ## Describe the bug
As of today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with 100% certainty whether I have been doing something wrong with my dataset script this whole time or whether this is simply a bug in the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
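For reference, a minimal sketch of the pattern suggested in the comments, where the key counter is never reset between files (the method signature and field name are assumptions):
```python
def _generate_examples(self, filepaths):
    key = 0  # never reset, so every yielded key stays unique across files
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield key, {"text": line.strip()}
                key += 1
```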
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2441/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2440/comments | https://api.github.com/repos/huggingface/datasets/issues/2440/events | https://github.com/huggingface/datasets/issues/2440 | 908,521,954 | MDU6SXNzdWU5MDg1MjE5NTQ= | 2,440 | Remove `extended` field from dataset tagger | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too",
"Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https://github.com/huggingface/datasets-tagging\r\nOtherwise I can probably fix it next week",
"I've opened a PR on `datasets-tagging` to fix the issue 🚀 ",
"thanks ! this is fixed now"
] | 1,622,567,922,000 | 1,623,229,591,000 | 1,623,229,590,000 | MEMBER | null | null | null | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() got an unexpected keyword argument 'extended'
tests/test_dataset_cards.py:70: ValueError
```
Consider either removing this tag from the tagger or including it as part of the validation step in the CI.
cc @yjernite | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2440/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/2440/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2439/comments | https://api.github.com/repos/huggingface/datasets/issues/2439/events | https://github.com/huggingface/datasets/pull/2439 | 908,511,983 | MDExOlB1bGxSZXF1ZXN0NjU5MTkzMDE3 | 2,439 | Better error message when trying to access elements of a DatasetDict without specifying the split | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,567,072,000 | 1,623,773,003,000 | 1,623,056,075,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2439",
"html_url": "https://github.com/huggingface/datasets/pull/2439",
"diff_url": "https://github.com/huggingface/datasets/pull/2439.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2439.patch",
"merged_at": 1623056075000
} | As mentioned in #2437 it'd be nice to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name.
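A rough sketch of the kind of check that could produce such a message; the exact wording and the real class internals (e.g. handling of `NamedSplit` keys) are assumptions:
```python
class DatasetDict(dict):
    def __getitem__(self, key):
        # splits are string keys; anything else is almost certainly a mistake
        if not isinstance(key, str):
            raise KeyError(
                f"Invalid key: {key!r}. A DatasetDict holds one dataset per split, "
                f"so please select a split first, e.g. ds['train'][{key!r}]. "
                f"Available splits: {sorted(self)}"
            )
        return super().__getitem__(key)
```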
cc @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2439/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2438/comments | https://api.github.com/repos/huggingface/datasets/issues/2438/events | https://github.com/huggingface/datasets/pull/2438 | 908,461,914 | MDExOlB1bGxSZXF1ZXN0NjU5MTQ5Njg0 | 2,438 | Fix NQ features loading: reorder fields of features to match nested fields order in arrow data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,563,770,000 | 1,622,797,351,000 | 1,622,797,351,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2438",
"html_url": "https://github.com/huggingface/datasets/pull/2438",
"diff_url": "https://github.com/huggingface/datasets/pull/2438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2438.patch",
"merged_at": 1622797350000
} | As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features don't match. The order is important since it matters for the underlying arrow schema.
To fix that I re-order the features based on the arrow schema:
```python
inferred_features = Features.from_arrow_schema(arrow_table.schema)
self.info.features = self.info.features.reorder_fields_as(inferred_features)
assert self.info.features.type == inferred_features.type
```
The re-ordering is a recursive function. It takes into account that the `Sequence` feature type is a struct of list and not a list of struct.
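As an illustration only, a simplified version of such a recursive re-ordering over plain dicts (the real implementation also handles the `Sequence` encoding, which is omitted here, and this sketch assumes both sides share the same field names):
```python
def reorder_like(source: dict, target: dict) -> dict:
    # Rebuild `source` following the field order of `target` (the arrow schema),
    # recursing into nested structs
    reordered = {}
    for name, target_value in target.items():
        value = source[name]
        if isinstance(value, dict) and isinstance(target_value, dict):
            value = reorder_like(value, target_value)
        reordered[name] = value
    return reordered
```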
Now it's possible to load `natural_questions` again :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2438/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2437/comments | https://api.github.com/repos/huggingface/datasets/issues/2437/events | https://github.com/huggingface/datasets/pull/2437 | 908,108,882 | MDExOlB1bGxSZXF1ZXN0NjU4ODUwNTkw | 2,437 | Better error message when using the wrong load_from_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?\r\n\r\nFor instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the split requested in not in the dict? pointing to the list of split and the fact that this is a datasetdict containing several datasets?",
"Good idea, let me add a better error message for this case too",
"As a digression from the topic of this PR, IMHO I think that the difference between Dataset and DatasetDict is an additional abstraction complexity that confuses \"typical\" end users. I think a user expects a \"Dataset\" (whatever it contains multiple or a single split) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.\r\n\r\nI don't know your opinion about this, but it might be worth discussing...\r\n\r\nFor example, I really like the line of the solution of using the function `load_from_disk`, which hides the previous mentioned complexity and handles under the hood whether Dataset/DatasetDict instances should be created...",
"I totally agree, I just haven't found a solution that doesn't imply major breaking changes x)",
"Yes I would also like to find a better solution. Do we have any solution actually? (even implying breaking changes)\r\n\r\nHere is a proposal for discussion and refined (and potential abandon if it's not good enough):\r\n- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other\r\n- let's disallow the use of integers in split names (probably not a very big breaking change)\r\n- when you index with integers you access the examples progressively in split after the other is finished (in a deterministic order)\r\n- when you index with strings/split name you have the same behavior as now (full backward compat)\r\n- let's then also have all the methods of a Dataset on the DatasetDict",
"The end goal would be to merge both `Dataset` and `DatasetDict` object in a single object that would be (pretty much totally) backward compatible with both.",
"I like the direction :) I think it can make sense to concatenate them.\r\n\r\nThere are a few things that I we could discuss if we want to merge Dataset and DatasetDict:\r\n1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(...)\r\ndataset[\"train\"]\r\ndataset[\"input_ids\"]\r\n```\r\n2. what happens when you iterate over the object ? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.\r\n\r\nMoreover regarding your points:\r\n- integers are not allowed as split names already\r\n- it's definitely doable to have all the methods. Maybe some of them like `train_test_split` that is currently only available for Dataset can be tweaked to work for a split dataset",
"Instead of suggesting the use of `Dataset.load_from_disk` and `DatasetDict.load_from_disk`, the error message now suggests to use `datasets.load_from_disk` directly",
"Merging the error message improvement, feel free to continue the discussion here or in a github issue"
] | 1,622,540,602,000 | 1,623,175,430,000 | 1,623,175,430,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2437",
"html_url": "https://github.com/huggingface/datasets/pull/2437",
"diff_url": "https://github.com/huggingface/datasets/pull/2437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2437.patch",
"merged_at": 1623175429000
} | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2437/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2436/comments | https://api.github.com/repos/huggingface/datasets/issues/2436/events | https://github.com/huggingface/datasets/pull/2436 | 908,100,211 | MDExOlB1bGxSZXF1ZXN0NjU4ODQzMzQy | 2,436 | Update DatasetMetadata and ReadMe | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,539,957,000 | 1,623,677,007,000 | 1,623,677,006,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2436",
"html_url": "https://github.com/huggingface/datasets/pull/2436",
"diff_url": "https://github.com/huggingface/datasets/pull/2436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2436.patch",
"merged_at": 1623677006000
} | This PR contains the changes discussed in #2395.
**Edit**:
In addition to those changes, I'll be updating the `ReadMe` as follows:
Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.
One way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors.
This way, we don't have to throw validation errors, but only parsing errors in `__init__()`. We can have an option in `__init__()` to suppress parsing errors so that an object is still created for validation. Doing this will allow the user to get all the errors in one go.
In `test_dataset_cards`, we are already catching error messages and appending them to a list. This can be done with `ReadMe()` for parsing errors, and with `ReadMe(..., suppress_errors=True); readme.validate()` for validation, separately.
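A hypothetical usage sketch of that flow (the module path and the `suppress_parsing_errors` argument name are assumptions):
```python
from pathlib import Path

from datasets.utils.readme import ReadMe

card_path = Path("datasets/squad/README.md")
error_messages = []
try:
    readme = ReadMe.from_readme(card_path, suppress_parsing_errors=True)
    readme.validate()  # parsing + validation errors surface here in one go
except Exception as readme_error:
    error_messages.append(f"README:\n{readme_error}")
```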
**Edit 2**:
The only parsing issue we have as of now is multiple headings at the same level with the same name. I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way.
Wdyt @lhoestq ?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2436/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2435/comments | https://api.github.com/repos/huggingface/datasets/issues/2435/events | https://github.com/huggingface/datasets/pull/2435 | 907,505,531 | MDExOlB1bGxSZXF1ZXN0NjU4MzQzNDE2 | 2,435 | Insert Extractive QA templates for SQuAD-like datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)",
"urgh, the windows tests are failing because of encoding issues 😢 \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path / \"datasets\" / dataset_name / \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\n```",
"Seems like the encoding issues on windows is also being tackled in #2418 - will see if this solves the problem in the current PR"
] | 1,622,470,151,000 | 1,622,730,870,000 | 1,622,730,747,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2435",
"html_url": "https://github.com/huggingface/datasets/pull/2435",
"diff_url": "https://github.com/huggingface/datasets/pull/2435.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2435.patch",
"merged_at": 1622730747000
} | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. the same column names, especially for the nested `answers` column, because the current implementation does not support casting with mismatched columns; see #2434)
* Less than 20GB (my laptop can't handle more right now)
The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services.
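For instance, once merged, any of the templated datasets should be castable to the common schema; a minimal sketch (the dataset choice is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("squad_kor_v1", split="train")
ds = ds.prepare_for_task("question-answering-extractive")
```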
PR #2429 should be merged before this one.
cc @abhi1thakur | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2435/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2435/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2434/comments | https://api.github.com/repos/huggingface/datasets/issues/2434/events | https://github.com/huggingface/datasets/issues/2434 | 907,503,557 | MDU6SXNzdWU5MDc1MDM1NTc= | 2,434 | Extend QuestionAnsweringExtractive template to handle nested columns | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"this is also the case for the following datasets and configurations:\r\n\r\n* `mlqa` with config `mlqa-translate-train.ar`\r\n\r\n"
] | 1,622,470,011,000 | 1,623,918,090,000 | null | MEMBER | null | null | null | Currently the `QuestionAnsweringExtractive` task template and `prepare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ from those in `squad` and trigger an `ArrowNotImplementedError`:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-12-50e5b8f69c20> in <module>
----> 1 ds.prepare_for_task("question-answering-extractive")[0]
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`
1437 dataset.info.task_templates = None
-> 1438 dataset = dataset.cast(features=template.features)
1439 return dataset
1440
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
977 format = self.format
978 dataset = self.with_format("arrow")
--> 979 dataset = dataset.map(
980 lambda t: t.cast(schema),
981 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1600
1601 if num_proc is None or num_proc == 1:
-> 1602 return self._map_single(
1603 function=function,
1604 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
176 }
177 # apply actual function
--> 178 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
179 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
180 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1940 ) # Something simpler?
1941 try:
-> 1942 batch = apply_function_on_filtered_inputs(
1943 batch,
1944 indices,
~/git/datasets/src/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1837 processed_inputs = (
-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1839 )
1840 if update_data is None:
~/git/datasets/src/datasets/arrow_dataset.py in <lambda>(t)
978 dataset = self.with_format("arrow")
979 dataset = dataset.map(
--> 980 lambda t: t.cast(schema),
981 batched=True,
982 batch_size=batch_size,
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
241 else:
242 options = CastOptions.unsafe(target_type)
--> 243 return call_function("cast", [arr], options)
244
245
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<answer_end: list<item: int32>, answer_start: list<item: int32>, text: list<item: string>> to struct using function cast_struct
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2434/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2433/comments | https://api.github.com/repos/huggingface/datasets/issues/2433/events | https://github.com/huggingface/datasets/pull/2433 | 907,488,711 | MDExOlB1bGxSZXF1ZXN0NjU4MzI5MDQ4 | 2,433 | Fix DuplicatedKeysError in adversarial_qa | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,468,927,000 | 1,622,537,531,000 | 1,622,537,531,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2433",
"html_url": "https://github.com/huggingface/datasets/pull/2433",
"diff_url": "https://github.com/huggingface/datasets/pull/2433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2433.patch",
"merged_at": 1622537530000
} | Fixes #2431 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2433/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2432/comments | https://api.github.com/repos/huggingface/datasets/issues/2432/events | https://github.com/huggingface/datasets/pull/2432 | 907,462,881 | MDExOlB1bGxSZXF1ZXN0NjU4MzA3MTE1 | 2,432 | Fix CI six installation on linux | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,622,466,936,000 | 1,622,467,027,000 | 1,622,467,026,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2432",
"html_url": "https://github.com/huggingface/datasets/pull/2432",
"diff_url": "https://github.com/huggingface/datasets/pull/2432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2432.patch",
"merged_at": 1622467026000
} | For some reason we end up with this error in the Linux CI when running `pip install .[tests]`:
```
pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequirement('six>1.9'), SpecifierRequirement('six>=1.11'), SpecifierRequirement('six~=1.15'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0'), SpecifierRequirement('six>=1.11.0'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.6.1'), SpecifierRequirement('six>=1.9'), SpecifierRequirement('six>=1.5'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six'), SpecifierRequirement('six'), SpecifierRequirement('six~=1.15.0'), SpecifierRequirement('six'), SpecifierRequirement('six<2.0,>=1.6.1'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0')
```
example CI failure here:
https://app.circleci.com/pipelines/github/huggingface/datasets/6200/workflows/b64fdec9-f9e6-431c-acd7-e9f2c440c568/jobs/38247
The main version requirement comes from tensorflow: `six~=1.15.0`
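For illustration, the pin might look like this (the exact requirements list name in `setup.py` is an assumption):
```python
TESTS_REQUIRE = [
    # pin six to tensorflow's requirement to avoid the resolver conflict above
    "six~=1.15.0",
]
```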
So I pinned the six version to this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2432/timeline | null | null | true |